
Java streams collect excel CSV to a list filtering based on the sum of a column

Suppose we have an Excel spreadsheet that looks like this:

StatusCount  FirstName  LastName  ID
1            Tod        Mahones   122145
0            Tod        Mahones   122145
1            Tod        Mahones   122145
-1           Tod        Mahones   122145
1            Ronny      Jackson   149333
1            Eliza      Cho       351995
-1           Eliza      Cho       351995
1            James      Potter    884214
1            James      Potter    884214
-1           Peter      Walker    900248
1            Zaid       Grits     993213

How can I collect a list of only the IDs of the people whose status counts sum to a value greater than 0, discarding anyone whose sum is 0 or less? For the spreadsheet above, the resulting list in Java should look like:

List<Integer> ids = [122145, 149333, 884214, 993213]

Update (adding in what I tried so far):

List<Integer> ids = csvFile.stream()
                           .map(Arrays::asList)
                           .filter(column -> column.get(0).equalsIgnoreCase("1"))
                           .map(column -> column.get(3))
                           .map(Integer::parseInt)
                           .sorted()
                           .collect(Collectors.toList());

This collects IDs only for rows with a status count of 1, but that isn't the right process: the status counts should be summed per person/ID (which also takes care of duplicates), and the ID should be collected only if its sum is > 0; otherwise it is discarded.

Update 2: I forgot to mention that the CSV file is read into Java as a List<String[]>, where the List contains the rows of the CSV and each String[] holds the contents of one row, like so:

[[1, Tod, Mahones, 122145],[0, Tod, Mahones, 122145], [1, Tod, Mahones, 122145], ...]
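For context, a minimal sketch of how a CSV like this might be read into a List<String[]> (this is an assumption about the loading step, which the question does not show; it assumes a comma-delimited file with no header row):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CsvReader {

    // Reads each line of the file and splits it on commas,
    // producing one String[] per CSV row.
    public static List<String[]> readCsv(Path path) throws IOException {
        try (Stream<String> lines = Files.lines(path)) {
            return lines
                .map(line -> line.split(","))
                .collect(Collectors.toList());
        }
    }
}
```

Note this naive split does not handle quoted fields containing commas; a real CSV parser would be needed for that.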


Answer

The following should work:

  1. Create a Map<Integer, Integer> to summarize statuses per ID using Collectors.groupingBy + Collectors.summingInt
  2. Filter entries of the intermediate map and collect keys (IDs) to the list.

If the order of IDs should match their order in the input file, LinkedHashMap::new can be supplied as the map factory when building the map.

public static List<Integer> getIDs(List<String[]> csvFile) {
    return csvFile.stream()
        .map(Arrays::asList) 
        .collect(Collectors.groupingBy(
            column -> Integer.parseInt(column.get(3)),
            LinkedHashMap::new, // optional argument to maintain insertion order
            Collectors.summingInt(
                column -> Integer.parseInt(column.get(0))
        )))
        .entrySet()
        .stream()
        .filter(e -> e.getValue() > 0)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
}
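As a side note, the intermediate `.map(Arrays::asList)` step can be dropped by indexing the String[] rows directly; this is only a stylistic variant of the same grouping-and-summing approach, not a different algorithm:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class IdFilter {

    // Same logic as above, but operating on the raw String[] rows
    // instead of converting each row to a List first.
    public static List<Integer> getIDs(List<String[]> csvFile) {
        return csvFile.stream()
            .collect(Collectors.groupingBy(
                row -> Integer.parseInt(row[3]),        // key: ID column
                LinkedHashMap::new,                     // keep input order
                Collectors.summingInt(
                    row -> Integer.parseInt(row[0]))))  // sum status column
            .entrySet()
            .stream()
            .filter(e -> e.getValue() > 0)              // keep positive sums
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }
}
```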

Test

List<String[]> csvFile = Arrays.asList(
    new String[] {"1", "Tod", "Mahones", "122145"},
    new String[] {"0", "Tod", "Mahones", "122145"},
    new String[] {"1", "Tod", "Mahones", "122145"},
    new String[] {"-1", "Tod", "Mahones", "122145"},
    new String[] {"1", "Ronny", "Jackson", "149333"},
    new String[] {"1", "Eliza", "Cho", "351995"},
    new String[] {"-1", "Eliza", "Cho", "351995"},
    new String[] {"1", "James", "Potter", "884214"},
    new String[] {"1", "James", "Potter", "884214"},
    new String[] {"-1", "Peter", "Walker", "900248"},
    new String[] {"1", "Zaid", "Grits", "993213"}
);

System.out.println(getIDs(csvFile));

Output

[122145, 149333, 884214, 993213]
User contributions licensed under: CC BY-SA