I've been tinkering with writing compilers again. To that end, I reread the pika parsing paper (2020). A complete dynamic programming algorithm for parsing is seductive, but it seems to have some subtle drawbacks that prevent it from being the default choice.
Having a lexing step always seems advisable, regardless of the parsing algorithm. It's much easier to reason about tokens than about individual characters. Even a tool that generates everything from a single grammar description is easier to write if it generates separate lexer and parser phases.
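To make that concrete, here is a minimal lexer sketch in Python. The toy expression grammar, the token names, and the `lex` function are all my own invention for illustration; none of this comes from the pika paper.

```python
# Minimal lexer sketch for a toy expression language (names are my own,
# not the paper's). The parser downstream sees NUM/PLUS/STAR tokens
# instead of raw characters.
import re
from typing import Iterator, NamedTuple

class Token(NamedTuple):
    kind: str
    text: str
    pos: int

TOKEN_SPEC = [
    ("NUM",  r"\d+"),
    ("PLUS", r"\+"),
    ("STAR", r"\*"),
    ("LPAR", r"\("),
    ("RPAR", r"\)"),
    ("SKIP", r"\s+"),  # whitespace is dropped here, not in the parser
]
TOKEN_RE = re.compile("|".join(f"(?P<{k}>{p})" for k, p in TOKEN_SPEC))

def lex(src: str) -> Iterator[Token]:
    pos = 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if m is None:
            raise SyntaxError(f"unexpected character {src[pos]!r} at {pos}")
        if m.lastgroup != "SKIP":
            yield Token(m.lastgroup, m.group(), pos)
        pos = m.end()
```

Running `list(lex("1 + 2 * 3"))` yields five tokens; the parser never has to think about whitespace or individual digits.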
The biggest drawback is that a lot of potentially unnecessary work happens in the bottom-up approach, because you try to match rules at positions where a match can never contribute to the final parse. When working top-down, you have contextual information about what to try next. Pika parsing instead has to figure out which terminal rules match at each input position, and builds the parse tree upward from there.
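A deliberately naive sketch of that seeding step, under my own simplified assumptions (a handful of single-character terminals, and none of the paper's right-to-left processing or propagation to parent clauses):

```python
# My own illustration, not the paper's algorithm: every terminal rule is
# tried at every input position, whether or not that position could ever
# be reached by the grammar. A top-down parser would only probe the
# terminals its current rule asks for.
terminals = {
    "NUM":  lambda s, i: i + 1 if i < len(s) and s[i].isdigit() else None,
    "PLUS": lambda s, i: i + 1 if i < len(s) and s[i] == "+" else None,
    "STAR": lambda s, i: i + 1 if i < len(s) and s[i] == "*" else None,
}

def seed_memo_table(src: str) -> dict[tuple[str, int], int]:
    # memo[(rule, start)] = end position of the successful match
    memo = {}
    for i in range(len(src)):                   # every position...
        for name, match in terminals.items():   # ...against every terminal
            end = match(src, i)
            if end is not None:
                memo[(name, i)] = end
    return memo
```

Every (terminal, position) pair gets probed, so even positions a top-down parser would never visit pay the matching cost.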
Many details of the paper are not properly explained. Maybe this is an opportunity to write them up in detail myself.