Batch fetching into @ElementCollection

I have the following entity with its persistent collections:

@Entity
@Table(name = "A")
public class A implements Identifiable<Long> {

    @Id
    private Long id;

    @ElementCollection
    @CollectionTable(name = "B", joinColumns = { @JoinColumn(name = "B_ID") })
    private Collection<B> bList;

    @ElementCollection
    @CollectionTable(name = "C", joinColumns = { @JoinColumn(name = "C_ID") })
    private Collection<C> cList;

    // getters omitted
}

After loading 10k A entities, I want to load their collections as well:

// load the A entities
final List<A> aList = getA();
// loop over the ~10k results
for (final A a : aList) {
    // each getter triggers a separate select per parent (lazy loading)
    final Collection<B> bList = a.getB();
    final Collection<C> cList = a.getC();
}

This generates an enormous number of select statements (~10k), roughly one per parent per collection.

Very poor performance!

Any idea how to use batch selects here?

Answer

I have solved this!

IDEA

With @ElementCollection, Hibernate takes care of the SQL statements and of mapping the rows into the collection for you. That is convenient, but there is a trade-off: the more parent rows you load, the worse the performance gets, because Hibernate issues one select per parent to fetch its children. With 10k parent records, that means roughly 10k extra selects.
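
For reference, the kind of "batch select" the question asks about also exists as a built-in Hibernate knob: annotating the collection with org.hibernate.annotations.BatchSize makes Hibernate initialize the collections of up to N parents with a single IN-clause select. A minimal sketch (assuming Hibernate 5.x and javax.persistence; this is an aside, not the approach taken below):

import java.util.Collection;

import javax.persistence.*;

import org.hibernate.annotations.BatchSize;

@Entity
@Table(name = "A")
public class A implements Identifiable<Long> {

    @Id
    private Long id;

    // With @BatchSize, accessing bList on one A triggers a single select that
    // initializes the bList of up to 50 loaded-but-uninitialized A instances,
    // cutting ~10k selects down to ~200.
    @ElementCollection
    @CollectionTable(name = "B", joinColumns = { @JoinColumn(name = "B_ID") })
    @BatchSize(size = 50)
    private Collection<B> bList;

    // cList would get the same @BatchSize annotation
}

This keeps the mapping untouched but still issues many queries; the approach below goes further and loads everything in one statement.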

Instead of loading the children for every single parent, create one native query that loads everything at once.

The joined result set looks like this:

PARENT_ID    CHILD_ID
1            1
1            2
1            3
2            1
2            2
3            3 

Then implement a Hibernate result transformer to convert these raw rows into DTOs.

Code example.

Create the DTOs

import java.util.Collection;
import java.util.HashSet;

public class ADto {
    private long id;
    private Collection<BDto> bList = new HashSet<>();

    // Constructor

    public void addChildren(BDto b) {
        bList.add(b);
    }

    // equals and hashCode
}

public class BDto {
    private long id;

    // Constructor

    // equals and hashCode
}
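
The // equals and hashCode placeholders matter: the de-duplication in the transformer's transformList below relies on them, so they should be based on the id alone. An assumed minimal version for ADto (BDto is analogous; this is a sketch, not the original implementation):

// id-based equals/hashCode; transformList's LinkedHashSet de-duplication depends on this
@Override
public boolean equals(final Object o) {
    if (this == o) {
        return true;
    }
    if (!(o instanceof ADto)) {
        return false;
    }
    return id == ((ADto) o).id;
}

@Override
public int hashCode() {
    return Long.hashCode(id);
}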

And the transformer

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;

import org.hibernate.transform.AliasedTupleSubsetResultTransformer;

public class CustomTransformer extends AliasedTupleSubsetResultTransformer {

    private final Map<Long, ADto> result = new HashMap<>();
    private final Map<String, Integer> aliasIndexes = new HashMap<>();

    @Override
    public boolean isTransformedValueATupleElement(final String[] aliases, final int tupleLength) {
        // required by AliasedTupleSubsetResultTransformer; the transformed value
        // is a DTO assembled from several tuple elements, not a tuple element itself
        return false;
    }

    @Override
    public List transformList(final List list) {
        // each parent appears once per child row; collapse the duplicates
        // while preserving the original order
        return new ArrayList(new LinkedHashSet(list));
    }

    @Override
    public ADto transformTuple(final Object[] tuple, final String[] aliases) {
        init(aliases);
        final A aEntity = (A) get(tuple, "parent"); // same alias as in the DAO layer
        final B bEntity = (B) get(tuple, "child");  // same alias as in the DAO layer
        final Long id = aEntity.getId();
        final ADto aDto;
        if (result.containsKey(id)) {
            aDto = result.get(id);
        } else {
            aDto = new ADto(...); // create ADto instance from aEntity
        }
        aDto.addChildren(new BDto(...)); // create BDto instance from bEntity
        result.put(id, aDto);
        return aDto;
    }

    private Object get(final Object[] tuple, final String alias) {
        return tuple[aliasIndexes.get(alias)];
    }

    private void init(final String[] aliases) {
        // remember the column index of each alias on the first row
        if (aliasIndexes.isEmpty()) {
            for (int i = 0; i < aliases.length; i++) {
                aliasIndexes.put(aliases[i], i);
            }
        }
    }
}

DAO layer

final String queryString = "SELECT {parent.*}, {child.*} FROM A parent LEFT JOIN B child ON parent.id = child.parent_id";
final NativeQuery query = getCurrentSession().createNativeQuery(queryString)
                                             .addEntity("parent", A.class)
                                             .addEntity("child", B.class);
// assemble the joined rows into ADto instances (one per parent)
query.setResultTransformer(new CustomTransformer());
return safeList(query);
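
Putting it together, a caller gets one ADto per parent, children included, from a single database round trip. A hypothetical usage sketch (findAllWithChildren and process are placeholder names, not from the original code):

// findAllWithChildren() wraps the native query shown above
final List<ADto> parents = dao.findAllWithChildren();
for (final ADto parent : parents) {
    // the children were attached by CustomTransformer; no extra selects happen here
    process(parent);
}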