@bww00 If you are just starting out, I think it’s better for you to learn how to work with lists.
You read that file into a list, and then write a filter function with pattern matching.
Here’s an example I just made for you.
It’s up to you, in the first two clauses of filter/4, to decide what to do with malformed files, for example when a ~~BOM is not closed and you find another one opened: whether you want to discard it or join it to the following ~~BOM data.
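A minimal sketch of what I mean (the exact file format isn’t specified in the thread, so I’m assuming each section opens with a `~~BOM` line and closes with a `~~EOM` line, both of which are assumptions on my part):

```elixir
defmodule SectionFilter do
  # Hypothetical sketch; the "~~BOM"/"~~EOM" open/close markers are assumed.
  # filter(lines, inside_section?, current_section_acc, finished_sections_acc)

  # Malformed: a new "~~BOM" while the previous section is still open.
  # Here we discard the unclosed section; you could instead join its
  # lines to the new section.
  def filter(["~~BOM" | rest], true, _current, done),
    do: filter(rest, true, [], done)

  # Malformed: a close marker with no open section - ignore it.
  def filter(["~~EOM" | rest], false, current, done),
    do: filter(rest, false, current, done)

  # A section opens.
  def filter(["~~BOM" | rest], false, _current, done),
    do: filter(rest, true, [], done)

  # A section closes: move it to the results.
  def filter(["~~EOM" | rest], true, current, done),
    do: filter(rest, false, [], [Enum.reverse(current) | done])

  # Inside a section: accumulate the line.
  def filter([line | rest], true, current, done),
    do: filter(rest, true, [line | current], done)

  # Outside any section: skip the line.
  def filter([_line | rest], false, current, done),
    do: filter(rest, false, current, done)

  def filter([], _inside?, _current, done), do: Enum.reverse(done)
end
```

You’d feed it something like `path |> File.stream!() |> Enum.map(&String.trim_trailing/1) |> SectionFilter.filter(false, [], [])`.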
One trick that can be useful with this kind of file processing is using Stream.transform to chunk the multi-line sections into lists of maps or whatever. Not sure that’s exactly what is required in this case. Are there many ~~BOM sections or just one?
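For the chunking idea, a sketch with Stream.transform (same assumed `~~BOM`/`~~EOM` markers as above); since it’s lazy, it works the same on `File.stream!(path)` as on an in-memory list:

```elixir
# The accumulator is nil outside a section, or a reversed list of
# lines inside one. Emits one list of lines per completed section.
chunk_sections = fn lines ->
  Stream.transform(lines, nil, fn
    "~~BOM", _acc -> {[], []}
    "~~EOM", acc when is_list(acc) -> {[Enum.reverse(acc)], nil}
    line, acc when is_list(acc) -> {[], [line | acc]}
    _line, nil -> {[], nil}
  end)
end

["noise", "~~BOM", "a", "b", "~~EOM"]
|> chunk_sections.()
|> Enum.to_list()
# => [["a", "b"]]
```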
If you are interested in speed, you can trade simplicity of implementation for complexity. If the file will fit in memory, slurping the entire file and then parsing the resulting binary is almost always faster.
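For example (again just a sketch), read everything at once and split the binary in memory instead of streaming line by line:

```elixir
defmodule SlurpParse do
  # Read the whole file into a single binary, then parse it in memory.
  # Splitting one big binary avoids the per-line overhead of streaming,
  # at the cost of holding the whole file in RAM.
  def lines(path) do
    path
    |> File.read!()
    |> String.split("\n", trim: true)
  end
end
```

From there you could run the same section-filtering logic over the resulting list.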
You can often get orders of magnitude improvement in file parsing speed by using all the tricks available. Elixir often looks really slow compared to languages like Ruby or Python when the benchmark is a straightforward use of File.stream! for parsing.
If you absolutely need parsing speed, one really good trick is to use the Erlang leex library. It allows a limited set of regular expressions in the token definitions, so you get the speed of a “real parser” with the flexibility of using regexes to define it.
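To give a feel for it, here’s a tiny made-up leex definition (the token names and patterns are purely illustrative, nothing to do with your file format). You’d save it as e.g. lexer.xrl, compile it with `:leex.file/1`, compile the generated .erl, and then call `:lexer.string/1` on your input:

```erlang
Definitions.

INT  = [0-9]+
WORD = [a-zA-Z]+
WS   = [\s\t\r\n]+

Rules.

{INT}  : {token, {int,  TokenLine, list_to_integer(TokenChars)}}.
{WORD} : {token, {word, TokenLine, TokenChars}}.
{WS}   : skip_token.

Erlang code.
```

The regex-like patterns in Definitions are what I meant by the limited regexp support: you get most of the convenience of regexes while still generating a fast, table-driven lexer.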