Does It Make Sense to Write Tutorials in the AI Era?

Marek Semjan

AI, tutorials, learning


2025-11-22 15:57 +0000


Introduction

It is 2025, and LLMs such as ChatGPT, Claude, and Gemini, and AI more generally, are nothing new anymore. In recent years they have quickly become widespread, and most technical people use them in some capacity. For new developers who started programming after the release of LLMs, they are an indispensable tool, and many would probably struggle to function without them. Even for those who use them much less, they are still very useful and can speed up everything from writing simple boilerplate code and auto-completing functions to producing full implementations of more routine software. AI can also help with learning new technologies: explaining existing code, providing code examples for whatever technology you want to learn, fixing mistakes, and giving you feedback and code reviews. And even if you prefer a more traditional approach and want to learn from non-AI resources, it can at least offer a structured learning road map with a timeline and study projects to reinforce the concepts you learn.

Since the majority of the content on my blog takes the form of tutorials aimed at beginners, I have recently been wondering whether it still makes sense to focus on it, and whether there is still any interest in it. In this post, I try to explore whether it makes sense to keep writing tutorials on technical topics, or whether it may be better to use LLMs to generate personalized study material instead.

Advantages Of AI Tutorials

The advantages are quite obvious. You can quickly get information that is specifically tailored to your level and needs. As long as you can write a good prompt, you will get exactly what you want, with no need to scour the internet for hours before finally finding what you were looking for. Especially for less common questions or more advanced topics, this may save you a lot of time. In some cases, tutorials with more advanced examples don't exist at all, and you would otherwise have to read the open-source code or piece together information from several partial tutorials.

Moreover, you can ask follow-up questions, and the AI will give you a more in-depth explanation or modify the examples to better suit your needs. While websites with human-written tutorials may have comment sections, there is no guarantee that your questions will be answered, let alone in a timely manner, whereas the AI responds almost immediately.

In fact, with a little bit of prompting you can change every aspect of the tutorial that the LLM generates. Do you want practice problems? Sure, you can have them! Do you want your learning experience to be a super interactive adventure with a magical theme? Of course, you'll have it! Do you want your teacher to be Master Yoda, or Albus Dumbledore? Why not? Or do you prefer more of a tough-love approach? You can get your tutorial from an LLM pretending to be David Goggins… And once you finish your practice problems, you will get feedback as well, which wouldn't be possible with a traditional resource unless you paid an actual tutor.
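
As a minimal sketch of what this looks like in practice, the snippet below uses the OpenAI Python SDK; any chat-capable LLM API would work similarly, and the model name and persona are just illustrative choices of mine, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute whatever you use
    messages=[
        {
            "role": "system",
            # The persona and the tutoring style live entirely in the prompt.
            "content": "You are Master Yoda. Teach patiently, in your own "
                       "speech pattern, and end every answer with a short "
                       "practice problem for the student.",
        },
        {"role": "user", "content": "Explain Rust's borrow checker to a beginner."},
    ],
)
print(response.choices[0].message.content)
```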

And ever since LLMs gained the ability to search the web, you can get code examples based on the current version of a language or library. That might not be the case with a tutorial written five years ago, which may use an outdated version of a library, ignore current best practices, and perhaps rely on deprecated functions.

Disadvantages Of Learning With AI

Despite being very useful tools, LLMs also have drawbacks. Errors and hallucinations are the most obvious ones, and most users will encounter them sooner or later. This is especially true with less common technologies: because they are not as widespread, there is less data to train the models to use and understand them. It is therefore important for learners not to blindly trust everything the AI generates, but to actively test the code it produces and verify that it compiles and works as intended.
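
As a small illustration of what "actively test" can mean, suppose the LLM handed you the slugify helper below (a hypothetical example of mine). A few quick pytest-style checks pin down its behaviour, and the last one exposes a gap the generated code silently has:

```python
# Suppose the LLM generated this helper for us (hypothetical example).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Before trusting it, pin its behaviour down with a few tests (run with pytest).
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_extra_spaces():
    assert slugify("  many   spaces  ") == "many-spaces"

def test_slugify_punctuation():
    # This one fails: the generated code never strips punctuation,
    # which is exactly the kind of gap such tests catch.
    assert slugify("Hello, World!") == "hello-world"
```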

However, even when the AI manages to create a code example that both compiles and fulfills all the functional requirements, it may omit or ignore important aspects that a beginner does not know about or understand. For example, one article reports that security researchers analyzed 5,600 publicly available vibe-coded applications and found more than 2,000 vulnerabilities, over 400 exposed secrets, and 175 instances of leaked PII (including medical records, IBANs, phone numbers, and emails)1. Since beginners are often unaware of more advanced concerns such as security, they may accidentally ship an application with serious vulnerabilities. And even if the generated code has no security issues, there may be other problems with it: the LLM may produce code that does not follow best practices or is suboptimal, which can hinder one's learning and instill bad habits.
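
To make the risk concrete, here is my own sketch (using Python's built-in sqlite3 module) of the kind of lookup that generated code often writes via string interpolation, next to the parameterized version a beginner might not know to ask for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(name: str):
    # Interpolating user input straight into SQL: input such as
    # "' OR '1'='1" turns the query into one that matches every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver escapes the value for us.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns []
```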

Another article2 discussed the impact of LLMs on learning. While it specifically focused on writing essays, I assume the results can be generalized to other tasks, such as coding. The study found that the group that relied on LLMs to write essays suffered from diminished memory, creativity, and critical thinking skills. Compared to the control group, which didn't use LLMs, these participants struggled to recall important information from their own essays, exhibited reduced neural connectivity, and did not feel ownership of the written text. The concerning part is the study's finding that using LLMs may impair the development of essential cognitive skills. This matches my personal experience from before the LLM era: whenever I tried to learn something from an incomplete resource and had to google for a specific piece of information, often reading a dozen webpages before finding it, I always learned more than when everything I needed was handed to me on a silver platter. Reading the extra information, critically deciding whether it was relevant, and forming new connections forced me to learn more than simply memorizing the one fact I actually needed. I assume that getting answers from an LLM without having to look for them is very much like the silver platter.

While the previous points looked at LLMs from the learner's standpoint, the next one concerns the future of AI itself. Another study3 examined the effects of using AI-generated data to train new versions of models. It found that the output of models trained on artificial data was less varied and more uniform; basically, the higher the share of AI-generated data in the training set, the less creative the AI became. While this may not be an issue right now, future generations of LLMs may become worse at producing unique and creative tutorials. In my humble opinion, unless AI companies figure out how to source high-quality training data in the future, the ever-increasing amount of AI-generated content flooding the internet daily may well be the downfall of the current AI boom and lead to a significant decline in the quality of future models. Of course, progress in this area of research is rather quick, so maybe I am needlessly pessimistic, and researchers will figure out how to improve models despite it.

Lastly, I would like to point out that even though LLMs are free to use for now, this may change in the future. Companies such as OpenAI and Anthropic are first and foremost businesses, and profit is an important aspect of any successful business. A recent report4 claims that OpenAI lost $12 billion last quarter, with an average cash burn of $1.25 billion over the past two quarters. While Anthropic is doing much better, positioning itself to surpass the ChatGPT maker in revenue, it is no secret that training and running LLMs is expensive, and AI companies are placing high bets on the future. To become profitable, they may change their business strategy and try to increase monetization by various means, including cancelling their generous free tiers. In such a case, those who retained the ability to learn and work without relying too much on AI will have an advantage over those who did not.

Conclusion

So, should you use AI to study? And what about writing tutorials by hand, without LLMs?

I think the answers to both questions are less straightforward than they seem. LLMs and AI are tools, and when used responsibly, they can support and speed up the learning process, and even make it more fun. On the other hand, relying on AI too much can be harmful. That being said, I am not fully opposed to using AI for learning.

Two months ago, Ryder Carroll, the creator of the Bullet Journal method, released a video5 in which he discusses how he tries to future-proof his thinking in the age of AI without sacrificing the advantages this novel tool provides. Taking some inspiration from it, I will try to give a few recommendations specifically for studying.

Firstly, you need to know why you want to study a given topic in the first place. Is it for school? Then you probably have study material recommended by your teacher; you do not necessarily need anything else, just read what you were given. Here I would recommend a traditional approach and avoiding AI as much as possible. Once you have gone through the traditional material, you can use LLMs to test your knowledge (for example, by generating practice exams or study problems) or to explain the concepts you are weak on. However, you should first try to understand everything by yourself. It is more difficult that way, but such adversity forces growth. If you are specifically studying programming and your teacher gives you practice problems, you should also try to solve them yourself rather than generating answers with an LLM. If you get stuck, you may ask for hints, but in your prompt you should explicitly forbid the AI from giving you the necessary code (a sketch of such a prompt follows below). Once you have a solution, you may ask for a code review and recommendations on how to improve it.
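
As a minimal sketch of what such a constrained, hint-only prompt could look like (the wording and the example exercise are mine, not a tested recipe), you could build it like this and send it with whatever client you use:

```python
def hint_prompt(exercise: str, where_stuck: str) -> str:
    """Builds a prompt that asks for guidance but explicitly forbids code."""
    return (
        "I am solving this exercise myself and only want a hint.\n"
        "Do NOT write any code for me, not even fragments or pseudocode.\n"
        "Name the concept I am missing and ask me one guiding question.\n\n"
        f"Exercise: {exercise}\n"
        f"Where I am stuck: {where_stuck}"
    )

print(hint_prompt(
    "Reverse a singly linked list in place.",
    "I keep losing the reference to the next node.",
))
```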

If you are studying to learn something new or to advance your career, the knowledge itself is more important than getting immediate results. In that case, I would recommend using LLMs only as guides that give you basic recommendations, a list of resources, and a road map with a timeline for learning the most important concepts. You can also use them to generate interactive exercises and projects to reinforce those concepts. However, I would not use AI for the learning itself. Code reviews, hints, and explanations can also be useful when you get stuck, but really try to solve the problems yourself first.

If you are using AI to achieve a certain goal and don't care about learning, then maybe the knowledge is not as important. In that case, ask yourself: if you were willing to pay someone to do the job for you, what would the job posting look like? Write down the job description and a detailed specification of what you need. This way you at least gain a high-level understanding of the problem, and you can use the specification in your prompt. The extra context will help the LLM generate better answers, and you will still get something out of it, since writing the specification forces you to think about the problem and thus gives your brain some cognitive training.

When it comes to writing tutorials, I admit I am less sure. I do not think it makes sense to write tutorials for simple things that can be learned by reading the documentation. On the other hand, detailed step-by-step tutorials for more complicated topics are not really feasible, so in such cases I think it would be better to write a technical breakdown that explains the overall architecture and general concepts, and at the end link the code that readers may use as a reference when implementing their own solution. Or at least, that is the strategy I will choose for my own blog in the future.

Sources 📚️


  1. Hinniger-Foray, N. (2025, October 29). Methodology: 2K+ vulnerabilities in Vibe-Coded apps. Escape DAST - Application Security Blog. https://escape.tech/blog/methodology-how-we-discovered-vulnerabilities-apps-built-with-vibe-coding/ ↩︎

  2. Armitage, R. (2025). Your brain on ChatGPT. British Journal of General Practice, 75(758), 410. https://doi.org/10.3399/bjgp25x743181 ↩︎

  3. Martínez, G., Watson, L., Reviriego, P., Hernández, J. A., Juarez, M., & Sarkar, R. (2024). Towards understanding the interplay of generative artificial intelligence and the internet. In Lecture notes in computer science (pp. 59–73). https://doi.org/10.1007/978-3-031-57963-9_5 ↩︎

  4. Dingco, A. D. (2025, November 7). OpenAI Sees Heavy Losses while Rival Anthropic Projects $70B Sales by 2028. International Business Times. https://www.ibtimes.com/openai-sees-heavy-losses-while-rival-anthropic-projects-70b-sales-2028-3789856 ↩︎

  5. Bullet Journal. (2025, September 13). Future Proof your thinking (to save your mind) [Video]. YouTube. https://www.youtube.com/watch?v=O9vuPjoPaBE ↩︎