It is now impossible to talk about AI seriously without talking about LLMs. Whether people love them, fear them, dismiss them, or overhype them, they have already changed how research, engineering, and even daily intellectual work get done.
My own view is fairly simple: I choose to embrace them. Not in a naive "AGI is here" way, and not in the cynical "it is all just autocomplete" way either. LLMs are powerful tools with very unusual strengths. They compress a large amount of knowledge and pattern recognition into an interface that is easy to query, remix, and iterate with. That changes the economics of thinking and building.
The right response is neither blind worship nor reflexive skepticism. It is to use the tools aggressively, understand their limits, and keep updating your priors as the frontier moves.
At the same time, I do not think LLMs remove the need for judgment. If anything, they increase the premium on judgment. Once everyone has access to the same models, the real edge shifts to asking better questions, evaluating answers correctly, designing better systems around the model, and knowing when not to trust the output. The bottleneck moves upward.
This is also why I think the most interesting work in the LLM era is not just prompting. It is the full stack around prompting: data curation, evaluation, retrieval, post-training, tooling, product taste, and the discipline to connect model capability with actual user value. A raw model is impressive; a useful system is much harder.
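To make that concrete, here is a minimal, hypothetical sketch of what "a system around the model" looks like: naive retrieval, prompt assembly, generation, and a crude automatic check. Everything here is an illustrative stand-in, not a real implementation; in particular, call_model is a placeholder for whichever model or API you actually use. The point is only that the surface area grows quickly beyond the prompt itself.

```python
# Hypothetical sketch of the "stack around prompting":
# retrieval -> prompt assembly -> generation -> evaluation.


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for real retrieval)."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a prompt from retrieved context plus the user question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Use the context to answer.\nContext:\n{ctx}\nQuestion: {query}\nAnswer:"


def call_model(prompt: str) -> str:
    """Placeholder for a model call; returns a canned answer so the sketch runs on its own."""
    return "Paris is the capital of France."


def passes_check(answer: str, required_terms: list[str]) -> bool:
    """Crude automatic evaluation: does the answer mention the terms we expect?"""
    return all(t.lower() in answer.lower() for t in required_terms)


if __name__ == "__main__":
    docs = ["Paris is the capital of France.", "The Seine flows through Paris."]
    query = "What is the capital of France?"
    answer = call_model(build_prompt(query, retrieve(query, docs)))
    print(answer, "| check passed:", passes_check(answer, ["Paris"]))
```

Even in a toy like this, most of the code is not the model call: it is deciding what context to feed in, how to phrase the task, and how to judge the output. That ratio only gets more lopsided in real systems.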
In research and in industry, I keep coming back to the same lesson: new capabilities arrive faster than stable intuition. So the right response is to stay open-minded, keep learning, and build enough hands-on experience that your opinions are grounded in contact with reality. The field moves too quickly for dogma. What matters most, in the end, is still intuition and taste.