Parse a large text file using Node.js streams - part 2
Have a look at part 1 for the necessary context. Here we go.
Question: count the name occurrences
We need to identify the most common first name in the data and how many times it occurs.
Map/reduce is convenient here: the name is in column 8, so we isolate that column with a map, then accumulate the occurrences in an object with a reduce.
```js
const occurrences = {}
```
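Building on that empty accumulator, here is a minimal sketch of how the whole pipeline could look with Highland 2.x (the file name, the pipe separator, and the 0-based column index are assumptions for illustration, not details taken from part 1):

```js
const fs = require('fs')
const highland = require('highland')

highland(fs.createReadStream('./donations.txt')) // hypothetical file name
  .split()                                       // one line per event
  .map(line => line.split('|')[8])               // isolate the name column (index assumed)
  .reduce(occurrences, (acc, name) => {          // Highland 2.x signature: reduce(memo, iterator)
    acc[name] = (acc[name] || 0) + 1             // count each occurrence of the name
    return acc
  })
  .each(result => {
    // pick the name with the highest count
    const top = Object.keys(result)
      .reduce((a, b) => (result[a] > result[b] ? a : b))
    console.log(`${top}: ${result[top]}`)
  })
```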
Question: donations per month
Count how many donations occurred in each month and print out the results. The donation amount is in column 5.
This time we extract the reduce function for clarity:
```js
const accumulator = {}
```
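A sketch of how the extracted reducer and the surrounding pipeline could look, again with Highland 2.x and building on the accumulator above (the date column index, the date format, the separator, and the file name are assumptions, purely for illustration):

```js
const fs = require('fs')
const highland = require('highland')

// Extracted reducer: keys the count by year-month.
// Assumes the date lives in column 4 and starts with "YYYYMM" — illustrative only.
const countPerMonth = (acc, columns) => {
  const month = columns[4].substring(0, 6)
  acc[month] = (acc[month] || 0) + 1
  return acc
}

highland(fs.createReadStream('./donations.txt')) // hypothetical file name
  .split()                                       // one line per event
  .map(line => line.split('|'))                  // keep all columns this time
  .reduce(accumulator, countPerMonth)            // Highland 2.x signature: reduce(memo, iterator)
  .each(result => console.log(result))           // print the per-month counts
```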
Considerations
The hardest part of these two questions was building the reduce function, which required some research on Stack Overflow and a bit of serious knowledge.
The rest is easy, clear, readable, reliable and fast thanks to Streams and Highland.
Conclusions
A complete working example is in this GitHub repo.
Would love feedback.
Is the topic interesting? Is the solution nice? Would you like to read more about streams or Highland? Or about how to avoid lodash and use ECMAScript properly?
I do not have a comment section here, but Twitter is perfect.
Have a lovely day.