5 AI Project Foundations
Before we begin, I think it's worth mentioning the master Japanese swordsman Miyamoto Musashi, a Samurai of the late 1500s and early 1600s who, after prevailing in more than 60 one-on-one duels, wrote The Book of Five Rings, a treatise on the strategy of battle. As a Samurai he believed that to be the best fighter you had to be balanced: you had to practice calligraphy, sculpture, poetry, art, teaching and philosophy. He believed you couldn't have holes in your mental, physical or spiritual game. A popular quote from Miyamoto Musashi states:
“If you know the way broadly, you will see it in all things.”
Why am I discussing a warrior in an article about AI foundations? Because being involved in Artificial Intelligence means you must be knowledgeable in many disciplines to see the bigger picture in a way that allows you to work backwards to success, not just forwards. Client: "Here is where we need to be." You: "Here is how we get there."
I have a dedicated article on requirements here.
Now let's dive in…
1) Use Cases: Let's start with the BIG challenge - achieving ROI (or ROE, Return on Energy) based on the Use Case(s). I am leading with Use Cases because they present the most immediate challenge for a company. If the Use Case is too big, there is too much risk: funding, failure repercussions (someone has to sign off on this), loss of trust from leadership that will put up more walls for future endeavors, and so on. Make the Use Case too small, and there is not enough value or business benefit, no defined metrics, no support or visibility; the budget may not extend beyond the Pilot no matter how successful it is. It has to hit a sweet spot, so let's discuss driving this from the top down. How?
There should be a driving Company Vision or Mantra, like "We are an AI-forward company that delivers X" or "We are defining the next generation of the X industry so that…X". Without a defined goal or vision (notice this is singular - not plural), there is no plan, so no one knows where they will end up. That's scary. There is also defining the Why. Why are we doing this? Why is this important to you (the client) or to us (the company)? I ask this for new projects with leadership and then ask the real question: put a number on the Why. Success should have metrics to aim for that align with the Vision. Funding for the long term requires defined metrics of success. Make it big, bold, defined but achievable. Use ROI framing to address ROI methodically - mapping ROI to specific hierarchies of value, which we will get into below.
Here is the reality of choosing solid use cases: budgets require justification. The CFO (finance) will want measurable impacts, IT Governance prioritizes risk and return, and other projects will be competing for funding. Without some type of revenue increase, efficiency gain, or risk reduction, your AI initiative will move closer to the left - near the trash bin. Go back to the Vision.
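To make that justification concrete, here is a minimal sketch of the kind of ROI math finance will expect; the function name, parameters and figures below are illustrative assumptions, not benchmarks.

```python
def simple_roi(revenue_gain, efficiency_savings, risk_reduction, total_cost):
    """ROI as a fraction: (measurable gains - cost) / cost."""
    gains = revenue_gain + efficiency_savings + risk_reduction
    return (gains - total_cost) / total_cost

# Illustrative pilot numbers (assumptions only):
roi = simple_roi(revenue_gain=250_000, efficiency_savings=120_000,
                 risk_reduction=30_000, total_cost=300_000)
print(f"Pilot ROI: {roi:.0%}")
```

The point of the exercise is less the arithmetic and more the discipline of putting a defensible number on each gain category before asking for budget.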
Why does the Vision have to be so strong? Because the Vision will be embedded in the culture of the company and be a driving force from leadership down. AI is not an enhancement, an upgrade or an add-on. AI is an operational-model shift that will affect everyone at the company and everyone doing business with the company. It requires dedication, grit and consistent focus with clear objectives and milestones. Value may not be achieved until foundations in people, processes or technology are fixed, and that takes time and dedication.
Before we leave Use Cases, here are the top 5 concerns blocking AI budgets in 2026 (highest concern to lowest):
1. Unclear ROI and Financial Discipline
2. Integration & Infrastructure Complexity
3. Governance, Risk & Security Concerns
4. Talent & Skills Shortage
5. The Budget Trap - Pilot-to-Production Funding, as discussed above
Governance & Security: For this and other articles, I follow the execution and implementation challenges I've gone through as a Solution Architect. I won't cover Governance & Security in depth here because I work in unison with the company's CIO, CoE and Governance teams as we design and implement the solution, addressing risks and concerns consistently throughout the project. That said, you should lead these Governance milestones to make sure all items are addressed so there are no surprises later that could require remodeling the design or structural models, or delay sign-off on the project or sprint. That will fall to you, so take ownership early and drive transparency. Onward.
2) Data Management (quality, quantity, accessibility, alignment, traceability, etc.): Opening the door to the company's data will reveal projects of its own as they find out that AI is only as good as the data. The gap to "good" may take a tremendous effort. Consider the fact that data is the number one hidden blocker in AI implementations.
A Data Management Plan (DMP) for AI is dedicated to output. By defining a plan, companies take immediate ownership and accountability. Where are we going? How do we get there? A DMP defines the following:
How data is collected, processed, secured, and governed throughout the AI lifecycle, from ingestion to AI model retirement, which maps to the technical debt covered below.
How the plan "defines" and "ensures" that data is clean (define "clean"), compliant (define "compliant") and accessible (you get it), focusing on quality, ethics, and security to drive reliable, scalable AI initiatives. Ethics, to be clear, deals with transparency and audit processes for bias detection.
Some of the components of an AI DMP are covered in the next items, starting with data identification (where is it?), acquisition (how do we get it?) and preparation (define the gap to good).
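As a sketch of what "define the gap to good" can look like in practice, here is a minimal metadata completeness check; the required fields and record shape are hypothetical examples, not a standard.

```python
# Hypothetical required metadata fields for an asset record.
REQUIRED_FIELDS = ("asset_id", "description", "location", "last_updated")

def field_completeness(records, required=REQUIRED_FIELDS):
    """Fraction of records with a non-empty value, per required field."""
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / len(records)
        for f in required
    }

records = [
    {"asset_id": "A1", "description": "Pump", "location": "Plant 2",
     "last_updated": "2025-01-03"},
    {"asset_id": "A2", "description": "", "location": "Plant 2",
     "last_updated": None},
]
print(field_completeness(records))
# asset_id and location are 100% complete; description and last_updated are 50%
```

A report like this turns "the data isn't good enough" into a measurable, per-field gap the team can own and close.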
For more reading on Data Management, go to The 6 Primary Components of Asset Centric Projects.
3) System (Technical) Infrastructure: When I arrive at a client's site, one of the first things I do before addressing the requirements, integrations and other moving components is address where the data lives. This involves an assessment whose output is a visual diagram, with context, of the client's system infrastructure. I identify not only the systems they think we need to address their requirements but ALL systems, and where the master data will be. I once experienced a last-minute system update containing large data sets that had not been brought to the team's attention. Is that the client's fault? Nope, I owned it, because I had not addressed all the systems that might have downstream or upstream impacts on the data we needed to fulfill the lifecycle. After that misstep I have always taken a methodical and diligent approach to this step early. No exceptions.
Once we have identified the systems of data, we figure out their "state" or grade on two levels: one is the data itself, and the other is how it fits the AI initiative. While the data may be excellent for the client's current needs, it may not be acceptable for the AI requirements. I usually provide Red/Yellow/Green grades with context so that the customer understands the assessment of the data and the gap analysis to "good" or Green. The strategy is how we as a team move to close this gap if it exists. Quick example: metadata may not be standardized, descriptions may be missing, or there may be too much technical debt that needs to be cleaned up. I worked with multiple IoT systems that did their jobs well in isolation, but when their data needed to be moved to the target for actionable use, two required transformation and the rest did not pass the data requirements.
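A minimal sketch of the Red/Yellow/Green grading, assuming each system has already been scored 0 to 1 against the AI data requirements; the thresholds and system names are illustrative.

```python
def grade(score, green=0.95, yellow=0.80):
    """Map a 0-1 readiness score to a Red/Yellow/Green grade."""
    if score >= green:
        return "GREEN"
    if score >= yellow:
        return "YELLOW"
    return "RED"

# Hypothetical per-system readiness scores:
systems = {"EAM": 0.97, "IoT hub": 0.85, "Legacy CMMS": 0.60}
for name, score in systems.items():
    print(f"{name}: {grade(score)}")
```

The thresholds themselves should be agreed with the client up front, so a YELLOW means the same thing in every assessment.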
Question: How does the client currently handle technical debt? To be clear, technical debt is a software development concept where taking shortcuts or choosing easy, temporary solutions now - often to meet tight deadlines - results in higher, compounding "interest" in the form of extra development effort and maintenance later. Tech debt leads to fragility (vs. antifragility), bugs, and low customer satisfaction.
There are many kinds of debt, e.g.:
Code Debt (messy code): systems become harder to maintain over time, resulting in slower, costlier development later.
Data Quality Debt: incomplete, inaccurate, unstandardized, or unreliable data that slows action. Think of the nightmare that is unstandardized metadata.
Architectural Debt (poor system or modeling designs): building inflexible models that don't scale as data volume grows; boxed models that have ceilings and need restructuring.
So how would you rate a company's data system carrying 70/30 technical debt? What about 80/20? AI will drive up the data by X percent. Better to address these items early and quickly.
Next we address how to position the data for execution, whether for the client or the customer. I deal with Field Technicians, so I am looking for data that is immediately actionable, from positioning knowledge to asset logistics, be it parts, tools, skills or certifications. Do we move the data to the target or build integrations to the master systems? Can we even move the data to the target?
There is also the concept of no system of record but we will get to that later.
Let’s get to our next executional component.
4) Integration Infrastructure: I'm going to keep this one short because I have written on integrations multiple times, but I feel an approach from a project perspective is best here. When I begin a project, the teams are tasked with a few deliverables: the FDD (Functional Design Document), the SADD (Solutions Architectural Design Document), and the TDD (Technical Design Document).
In my last project, as with all large projects, I delivered a high-level SADD. Typically they are process-driven, but sometimes the TDD and SADD are combined, as with the latest project. I started with the System Diagram, with all systems defined for reference from a more detailed Excel master sheet. The target system objects are provided in a table below it, with descriptions, future-state designs and system-to-target mappings.
Below that is the Integration Design model, with a current and future state, which goes through versions as we update different integrations.
If an item is blocked, the system boxes get RED and the integration lines get RED. If it's in progress (no decision yet), it's YELLOW, and if it's signed off - good to go - it's GREEN. The document can be complex in nature, but it should be very simple to understand from a high-level view.
How do we track it? At the top of the document is a table showing the date each model was updated and its version number, so you have a reference of truth as the project moves forward and the document reaches maturity, which ends in sign-off by numerous parties. Each image is versioned off the last, so teams can reference the most up-to-date image without chasing people for information.
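The tracking described above can be sketched as a tiny status record; the field names and version-bump rule are my assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IntegrationStatus:
    source: str
    target: str
    status: str = "YELLOW"   # RED = blocked, YELLOW = in progress, GREEN = signed off
    version: int = 1
    updated: date = field(default_factory=date.today)

    def bump(self, new_status):
        """Record a status change as a new version with today's date."""
        self.status = new_status
        self.version += 1
        self.updated = date.today()

link = IntegrationStatus(source="IoT hub", target="Field Service app")
link.bump("GREEN")  # signed off - good to go
print(link.status, link.version)  # GREEN 2
```

Whether this lives in code, a spreadsheet, or the SADD's version table matters less than having one reference of truth that every team reads from.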
You may be asking why I am describing a day in the life. Well, this is a foundational discussion, and documentation provides the foundation for the build. I have shown up to projects with little documentation, or had to chase different people down to find out things that should have been well documented. It's sloppy and risky not to have standards for your projects. In some cases I couldn't even get the documentation because the person had left the company and their laptop had been wiped and passed to the next employee. I've done this from scratch more than once, so I am putting in a request: manage your client's documentation with the CoE or Product Managers, who should be responsible for the sign-off and management of the implementation documentation. It's polite and thoughtful for the next teams coming in, so do a good job.
Integrations are complex, and there is always a balance of where and how to move the data so that the target system can be actionable for the UI. It really is decided on a project-by-project basis and will require a dedicated article or three, but for now: document, so the next teams that show up can hit the ground running!
5)