ExploreDDD 2024 - Panel: The Crucial Intersection of DDD With LLMs
I did three things at the excellent ExploreDDD 2024 conference:
- a condensed version of my Designing microservices: responsibilities, APIs and collaborations workshop
- a presentation on physical design principles for microservices
- a panel on LLMs and DDD
The panel was a lot of fun, especially since the conference started with Eric Evans's keynote on DDD and LLMs - video and InfoQ summary.
In this post, I’ll describe my thoughts about LLMs - the good and the bad - and echo Eric’s advice about how to handle their uncertain future.
My thoughts about LLMs
There’s a lot to say about LLMs, but here are a few thoughts.
LLMs: the good parts
LLMs are a fascinating and useful technology. Previously, I briefly outlined common use cases for LLMs, such as text generation, summarization, rewriting, classification, entity extraction, and semantic search. And more recently, an interesting Harvard Business Review article described how people are using generative AI technologies in practice.
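To make one of those use cases concrete, here is a minimal sketch of the idea behind semantic search: documents and queries are mapped to embedding vectors, and similarity is measured geometrically. The embeddings below are hypothetical hand-made vectors purely for illustration; a real system would obtain them from an embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical document embeddings; a real system would compute these
# with an embedding model rather than hard-coding them.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "account security": [0.0, 0.2, 0.9],
}

def search(query_vec, top_k=1):
    """Rank documents by cosine similarity to the query embedding."""
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# A query whose embedding is close to the "refund policy" document:
print(search([0.8, 0.2, 0.1]))  # → ['refund policy']
```

The point is that "search" becomes nearest-neighbor lookup in embedding space, which is why semantically related text can match even without shared keywords.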
LLMs: the bad parts
While LLMs are useful, there are significant risks and challenges. For example:
- We are in the middle of a giant experiment, i.e. heading toward the peak of inflated expectations on the hype cycle.
- Hallucinations are a significant problem.
- There are significant risks, as illustrated by Air Canada’s LLM chatbot providing misleading advice.
- There’s also evidence of LLMs exhibiting racist behavior.
- The issue of plagiarism is still unresolved.
- Apparently, exponentially more training data results in a linear performance improvement, which suggests LLMs have significant scalability limits.
- It’s unclear whether LLM-powered products and services can be profitable and environmentally sustainable, especially if LLM vendors must license their training data.
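The scaling concern above can be illustrated with a toy logarithmic law: if performance grows with the log of the training-set size, then every 10x increase in data buys only the same fixed improvement. The function and constants here are made up purely to illustrate the shape of the curve, not measured from any model.

```python
import math

def toy_performance(tokens):
    """Hypothetical scaling law: benchmark score grows with log10 of
    the number of training tokens (constants are illustrative only)."""
    return 10 * math.log10(tokens)

# Exponential growth in data, linear growth in performance:
for tokens in (10**9, 10**10, 10**11):
    print(f"{tokens:>13,} tokens -> score {toy_performance(tokens):.1f}")
```

Under a curve like this, each order-of-magnitude jump in (increasingly expensive) data yields the same modest gain, which is the intuition behind the scalability worry.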
Here’s an interesting recent article about the uncertain future of Generative AI. A shocking statistic from the article compares AI expenditures with generated revenue:
$50B in, $3B out. That’s obviously not sustainable.
LLMs: when to use them
Hillel Wayne has a great heuristic for applying AI technologies:
Use it on things that are hard to do, easy to check, and easy to fix.
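Here is a hypothetical sketch of that heuristic in code: imagine an LLM drafts a regular expression (hard to write), which we then validate against known positive and negative examples (easy to check) before accepting it; a failing candidate is simply regenerated (easy to fix). The candidate pattern and examples below are invented for illustration.

```python
import re

def check_candidate(pattern, should_match, should_not_match):
    """Cheap, deterministic check of a (hypothetically LLM-generated)
    regex: it must compile, accept all positives, and reject all
    negatives."""
    try:
        rx = re.compile(pattern)
    except re.error:
        return False
    return (all(rx.fullmatch(s) for s in should_match)
            and not any(rx.fullmatch(s) for s in should_not_match))

# Imagine this pattern came back from an LLM prompt for ISO dates:
candidate = r"\d{4}-\d{2}-\d{2}"

ok = check_candidate(
    candidate,
    should_match=["2024-03-19", "1999-12-31"],
    should_not_match=["19-03-2024", "not a date"],
)
print(ok)  # → True
```

The generation step is fallible, but because the check is fast and deterministic, a wrong answer costs almost nothing: reject it and try again.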
What should we, as developers, do?
If history is any guide, there will be failures, and LLMs will go through the trough of disillusionment. Eric’s insightful keynote started with some excellent advice about how to handle the uncertainty: dive in and start experimenting with LLMs.
I’d add that we should have a healthy skepticism about the technology and its capabilities. We should also be aware of the risks and ethical issues.