7 practical lessons from over 150 AI projects

The implementation of AI is still a bumpy road in many organizations. Anyone who wants to be successful must look beyond the hype. Experts Benoît Hespel (Proximus ADA) and Dirk Luyckx (Codit) share the 7 most important lessons from over 150 AI projects.
It is now indisputable that AI offers enormous opportunities. Turning those opportunities into value in practice remains a challenge, however. Many companies run into obstacles, ranging from data issues and an unclear ROI to resistance during implementation. There is no shortage of theory; what is usually missing is real practical experience. Proximus NXT can now offer exactly that: more than 150 AI projects to date, which have given rise to these 7 lessons learned.
“If the expectations of an AI program are not clearly defined, the project will fail before it even starts.”
1. Start with the problem, not the model
“We already saw it in 2018,” says Benoît Hespel, Head of AI at Proximus ADA.
“Everyone wanted to use neural networks, even though they were not the right choice for certain projects.” Technology for technology’s sake is never the best starting point, but that is exactly what is happening today with LLMs and AI agents. “Companies don’t want to fall behind. They are eager to ‘do something with AI’, but often don’t start from a real problem,” says Dirk Luyckx, CTO at Codit. “Then we have to go back to the essence together: what exactly do you want to solve?”
2. Think big, but work with specific measurement points
Ambition is important, of course, as long as you split it into workable steps. “Saying that you want to ‘improve the efficiency of the network’ is too vague,” according to Benoît. “What specifically does that mean? Which KPIs do you want to improve, and to what degree?” Without a clear measurement framework, you get endless iterations without decisions. “In addition, you must be very strict with yourself. If the expectations are not clearly defined, the project will fail before it even starts.”

3. No value without solid data
“The value of AI doesn’t just come out of thin air,” says Dirk. “You need data, and that must be usable, available and correct.”
All too often, it turns out at the start of a project that the necessary labels are missing, or that the data teams thought they had is unusable in practice. “You have to objectively assess the maturity of your data,” says Benoît. “Without governance and insight into data ownership and privacy, you inevitably get stuck.”
Technical integration also often turns out to be a stumbling block. “Data live in silos,” says Dirk. “You have to break down those silos and make the data work together. Integration requires more than pipelines; it demands consistency, semantic alignment and often organizational agreements as well.”
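Both problems usually surface only once someone actually measures them. As a rough illustration of Benoît’s “objectively assess the maturity of your data”, the sketch below runs a very basic readiness check before any modelling starts: how many records exist, how many actually carry the label a model would need, and how much is simply missing. The file name, field names and thresholds are illustrative assumptions, not a Proximus or Codit method.

```python
# Minimal sketch of a pre-project data readiness check: record count,
# label coverage and the share of empty cells in a CSV export.
# File name, field names and thresholds are illustrative assumptions only.
import csv

def data_readiness(path: str, label_field: str) -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    total = len(rows)
    labelled = sum(1 for r in rows if (r.get(label_field) or "").strip())
    empty_cells = sum(
        1 for r in rows for v in r.values() if v is None or not str(v).strip()
    )
    cells = total * len(rows[0]) if rows else 0
    return {
        "records": total,
        "label_coverage": labelled / total if total else 0.0,
        "empty_cell_ratio": empty_cells / cells if cells else 1.0,
    }

report = data_readiness("customer_cases.csv", label_field="churned")
if report["label_coverage"] < 0.8 or report["empty_cell_ratio"] > 0.2:
    print("Data not mature enough yet: fix labelling and quality first.", report)
```

Even a check this simple surfaces the missing-labels problem Benoît describes before a team invests in models or integration work.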
4. Ensure a broad team
AI requires a multidisciplinary approach. “The biggest mistake organizations make is thinking that a data scientist and a data engineer will be enough,” according to Benoît. “You also need stakeholders from the business, domain experts, IT architects and analytics translators.” That last group is essential for translating between business expectations and technical models.
Dirk picks up on this: “The team must think about the adoption of the technology from day one. Besides building the model, you also have to explain it, create buy-in and guide users. Otherwise you run into resistance.” AI causes change – and change can only succeed if there is widespread support for it.

5. Build to be scalable, with the ultimate goal in mind
“Developing a proof of concept is one thing,” says Dirk. “Scaling the solution up is a lot harder.” Often a model is built on a limited dataset, without considering real-time processing, integration or maintenance. “The model then works well during the test phase, but can’t hold its own once it goes live at a larger scale.”
According to Benoît, preparation is key. “Think from the start about the end situation you are aiming for. If you want real-time applications, the infrastructure and data flows must be ready for that. Otherwise you start again from scratch.” Developing a POC with a ‘we’ll see afterwards’ attitude is not the best idea, say the experts. “Work with the larger goal in mind.”
6. Always take account of the ethical implications
Ethics is not an optional step when AI is involved. “AI is fundamentally different from standard IT applications,” says Dirk.
“The outcome of a question to an LLM changes over time, even if the prompt remains unchanged.” Monitoring the associated risks is therefore a must. “We do that with tools for content safety, bias detection and model drift,” explains Dirk.
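In its simplest form, model-drift monitoring of the kind Dirk mentions comes down to comparing the distribution of a model’s recent outputs with a reference window recorded at release. The sketch below uses the population stability index (PSI) for that comparison; the synthetic data and the 0.25 alert threshold are illustrative assumptions, not the actual tooling Codit or Proximus uses.

```python
# Minimal sketch of a model-drift check: compare recent model scores with a
# reference sample from release time using the population stability index (PSI).
# The synthetic data and the alert threshold below are illustrative only.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Higher PSI means the current score distribution has drifted further."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) for empty bins
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Scores logged at go-live versus scores from the most recent week (synthetic).
baseline_scores = np.random.default_rng(0).normal(0.6, 0.10, 5000)
recent_scores = np.random.default_rng(1).normal(0.5, 0.15, 5000)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:  # a commonly quoted rule of thumb for a significant shift
    print(f"Model drift suspected (PSI = {psi:.2f}); schedule a review.")
```

Dedicated platforms add bias detection and content-safety checks on top of this, but the principle stays the same: define a baseline, measure continuously and alert on deviation.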
The intent behind the use of an AI model plays a role too. Benoît: “You can use the same technology in an ethically acceptable or unacceptable way. Sentiment analysis, for instance, is fine for generic trend reports, but it can also be used maliciously, for example to assess individual employees. So monitoring is also needed to stay compliant.”
“An AI project doesn’t end with the release; that’s just the beginning.”
7. The evolution of AI never stops
AI systems become outdated: data change, behavior shifts and performance declines. “And yet an AI program seldom includes a budget for follow-up,” says Benoît. “Without monitoring you have no view of degradation, and without updates the solution doesn’t stay relevant.” Adoption also grinds to a halt if support for the application disappears after go-live.
The solution? “See support as an extended development phase,” says Dirk. “That way the team remains actively involved, with room for feedback and further optimization. That too is an important lesson: an AI project doesn’t end with the release; that’s really when it begins.” Bear that in mind throughout the process, including in the budget.
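One way to make that extended development phase tangible in the budget is to keep scoring the live solution on the KPI agreed at release and to flag every period in which it slips below the accepted level. The sketch below does exactly that; the weekly values, metric and tolerance are hypothetical examples, not figures from an actual Proximus project.

```python
# Minimal sketch of post-go-live follow-up: track an agreed KPI per period and
# flag degradation against the level accepted at release. All values are
# hypothetical examples.
from dataclasses import dataclass

@dataclass
class MetricSnapshot:
    period: str   # e.g. an ISO week such as "2025-W14"
    value: float  # e.g. precision of the deployed model on labelled samples

def degraded_periods(history: list[MetricSnapshot],
                     baseline: float,
                     tolerance: float = 0.05) -> list[MetricSnapshot]:
    """Return every period where the KPI fell more than `tolerance` below baseline."""
    return [s for s in history if s.value < baseline - tolerance]

history = [
    MetricSnapshot("2025-W12", 0.91),
    MetricSnapshot("2025-W13", 0.88),
    MetricSnapshot("2025-W14", 0.82),
]
for snapshot in degraded_periods(history, baseline=0.90):
    print(f"{snapshot.period}: KPI {snapshot.value:.2f} below accepted level; "
          "plan retraining or a review.")
```

An alert like that gives the team and the budget owner an objective trigger for the feedback and further optimization Dirk describes.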
Conclusion: practice over theory
The greatest value of AI lies in the application, but that is often exactly where it goes wrong. Technology alone isn’t enough; companies must also build governance, scalability, ethics and adoption into the process. Above all, anyone who starts with AI must keep learning. “Experience makes the difference,” Benoît concludes. “Not only to prevent mistakes, but also to get better. Every use case fine-tunes your approach.”


Benoît Hespel
Benoît Hespel is Head of AI at Proximus ADA, the AI and cybersecurity expertise center of Proximus. He manages and coordinates the development and implementation of AI solutions for internal and external customers.

Dirk Luyckx
Dirk Luyckx is CTO at Codit, one of the subsidiaries of Proximus NXT, and an expert in the design, development and management of data-driven, cloud-native solutions. He has over twenty years’ experience in software development, specializing in Microsoft, cloud and integration.