Today I want to share a real-world experience I had while working on a proof of concept. It’s a story about code, local environments, and artificial intelligence — but also about human reasoning and technical awareness. Because yes, an LLM can sound convincing… until you realize it’s leading you in the wrong direction.
The context: a POC in DDEV
I was building a new POC with a fairly standard architecture: Drupal as CMS, MariaDB as database, React for the frontend, and some Python services planned for machine learning and web scraping.
For local development I use DDEV, a CLI that manages Docker containers for Drupal, databases, and related services. It’s a setup I trust and use daily.
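For context, bootstrapping such a project with DDEV looks roughly like this (a sketch; the exact flags depend on your DDEV version):

```bash
# Minimal sketch: bootstrap a Drupal project with DDEV.
# MariaDB is DDEV's default database, so no database flag is needed.
ddev config --project-type=drupal10 --docroot=web
ddev start   # starts the web and database containers
```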
The AI suggestion
During a brainstorming session with a Large Language Model (LLM), it suggested: “Why not migrate from MariaDB to PostgreSQL? It’ll make your life easier later, especially for data analysis and Python integration.”
It sounded reasonable, and the model even provided detailed steps. So I tried it.
Hours of attempts, errors, and rebuilds
I changed my DDEV setup to use PostgreSQL: new containers, environment updates, connection tweaks, data imports… and then a cascade of errors. Missing tools, incomplete dependencies, forgotten details, like needing a ddev delete --omit-snapshot before rebuilding the project.
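For the record, the attempt boiled down to commands roughly like these (reconstructed as a sketch; the PostgreSQL version tag is an assumption, and flags may differ across DDEV versions):

```bash
# Changing a project's database type requires wiping its stored data first,
# which is the step I had initially missed.
ddev delete --omit-snapshot          # remove containers and the old MariaDB volume
ddev config --database=postgres:16   # switch the project to PostgreSQL
ddev start
ddev import-db < dump.sql            # re-import the data...
# ...except a MariaDB dump does not import cleanly into PostgreSQL,
# which was only the first of the cascading errors.
```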
For hours I followed the LLM’s instructions, shared errors, retried. Every answer was slightly different: “Sorry, I missed this step,” “Ah, you also need to update that file.” An endless loop.
Asking the right question
At some point I stopped and asked a better question: “For this POC, does it even make sense to change the database?”
The answer: “No. It’s not necessary — actually, it’s wrong. Keep MariaDB for Drupal and use PostgreSQL only for your Python services.”
In short: keep your domains separate. Drupal owns its own data; Python can have its own stack and communicate via API (Drupal includes JSON:API natively).
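To make that concrete, here is a minimal sketch of a Python service reading Drupal content over JSON:API instead of reaching into its database (the site URL, content type, and fetch_articles helper are hypothetical, for illustration only):

```python
import requests

# Hypothetical local DDEV site URL; JSON:API is provided by Drupal core's
# jsonapi module, exposed under the /jsonapi path prefix.
DRUPAL_URL = "http://my-poc.ddev.site"

def fetch_articles():
    """Read 'article' nodes from Drupal over JSON:API, not its database."""
    response = requests.get(
        f"{DRUPAL_URL}/jsonapi/node/article",
        headers={"Accept": "application/vnd.api+json"},
        timeout=10,
    )
    response.raise_for_status()
    # JSON:API wraps results in a "data" list of resource objects.
    return [item["attributes"]["title"] for item in response.json()["data"]]

if __name__ == "__main__":
    for title in fetch_articles():
        print(title)
```

Each side keeps its own storage, and the HTTP boundary is the only contract between them.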
I went back to the same LLM that suggested the migration, and it replied: “You’re absolutely right.”
When AI changes its mind
An LLM doesn't think, remember, or reason. It calculates. Every answer is probabilistic, based on statistical patterns in its training data rather than architectural logic. Without context, it will give you the most common solution, not necessarily the right one.
In my case, the issue wasn’t the code, it was the reasoning. I followed a context-free suggestion and wasted a morning fixing problems that shouldn’t have existed.
The value of human reasoning
LLMs are not infallible, and that’s fine. The real risk is that we, as developers, stop thinking critically about what we’re told.
An algorithm can generate perfect code, but it can’t tell whether that code is needed. It doesn’t know your project goals, deadlines, or architectural constraints.
That’s why every AI suggestion must go through a human reasoning filter: understanding context, purpose, and trade-offs.
Lessons learned
- Context first: a POC doesn’t need perfection; it needs simplicity and clarity.
- Keep domains separated: Drupal (MariaDB) and Python (PostgreSQL) should communicate through APIs, not share a database.
- AI as assistant, not architect: great for exploration, dangerous if followed blindly.
- Always verify: check official docs, forums, and compatibility notes.
Conclusion
This experience reminded me that AI is a powerful tool — but still just a tool. True value lies in human understanding: the ability to think, interpret, and make decisions with awareness.
Use AI to write code. Don’t use it to decide your architecture. Technology evolves fast, but responsibility remains human.
