Step One: Without Good Data, You’ll End Up with Bad AI
The fact is, AI is useless without quality data. Organizations vying to leverage AI productivity tools are disappointed when they buy into the hype, only to discover these applications are not plug-and-play. Varma’s (2024) research identified significant gaps in AI adoption stemming from the lack of systematic ways to tie implementations to organizational objectives. Likely contributors are a deficiency in foundational processes and the absence of comprehensive operational strategies, both of which are vital to effectively implementing packaged AI tools designed to enhance customer service chatbots or internal knowledge sharing. Fischer & Piskorz-Ryń (2021) and Meroño-Peñuela et al. (2025) add that, given the rapid evolution of AI models and the growing quantity and variety of data, a rigorous data governance strategy and process are paramount if organizations want AI to reuse data, generate new data for decision making, and take advantage of future AI trends. Ultimately, if data integrity is weak, trust in what AI produces will be questioned, the tools will not be used for their intended purpose, and the result is frustration and negative ROI.
Comprehensive Operational Strategies (What and Why)
These operational strategies are highlighted to underscore the importance of trust in data and in its use by AI. Without that trust, organizations will fail to realize the benefits of current tools and will fall behind as AI continues to evolve.
Data Strategy:
What: A holistic approach, aligned with organizational objectives, that includes governance, architecture, knowledge management, quality, and security. Data governance should also cover the systems and processes through which data is managed and used.
Why: Data is a strategic asset and the basis for leveraging AI effectively. Data governance is also a success factor in ensuring AI can draw on quality data from inside and outside the organization to produce well-triangulated, multi-modal outcomes.
Ethical Considerations:
What: The use of AI has elevated concerns about a lack of transparency, bias, and misinformation.
Why: Bias in algorithms can produce discriminatory outputs, and the effects compound if left unchecked. Organizational leaders driving digital transformation must ensure that ethical AI produces fair, explainable results, respects human rights, and meets security and compliance requirements (Abbu et al., 2022; Usmani et al., 2023). In addition to implementing processes and algorithms to remove bias, digital leaders must embed a human-centric approach that encourages AI and human collaboration to enhance user experience and empowerment while adhering to ethical standards (Usmani et al., 2023).
Organizational Change Strategy:
What: Change is not a one-time event. The rapid evolution of technologies has proven that and has also shown that companies able to shift will remain relevant and competitive. Organizational resilience is reflected in an organization’s ability to absorb and facilitate change while effectively maintaining flexibility throughout (Pu & Liu, 2023).
Why: Addressing organizational change at a macro level gives the business guidance and the empowerment to execute change management processes at every level of the company. By increasing change resilience, an organization improves its capacity to absorb change and matures its ability to manage change proactively instead of succumbing to it.
While most companies have these strategies in some form, they do not always translate into concrete processes or effective execution. To improve an organization's posture when selecting or implementing AI tools, the following foundational processes are recommended.
Foundational Processes (How)
The best thing about processes is that they are meant to be continually improved. To keep up with continual change, they should be analyzed for efficiencies and deficiencies. In the context of AI, there are basic processes that should be addressed before selecting or implementing new tools. In my field of expertise, the following core processes focus on trust in data; they will accelerate the use of general productivity tools and reduce the risks of implementing advanced AI in the future.
Data Governance: Because data governance is central to trustworthy AI, this author leverages the definition provided by Janssen et al. (2020): “Organizations and their personnel defining, applying and monitoring the patterns of rules and authorities for directing the proper functioning of, and ensuring the accountability for, the entire life-cycle of data and algorithms within and across organizations.” The researchers expanded this definition to cover data and data processing by AI, since data is expected to change continually and is a critical factor in the informational risks that lead to costly mistakes and ethical problems.
Knowledge Management: As AI promises to reduce manual processes and accelerate the path to employee optimization, Knowledge Management (KM) is a foundational process critical to leveraging and executing AI implementations. Data validity is essential to building trust between AI and humans, and implementing KM processes helps reduce the false, outdated, and conflicting information that would otherwise surface as bad answers from AI tools (a simple illustration of such a check follows this list of processes). Output: a KM playbook addressing how knowledge is acquired, analyzed, developed, and disseminated.
Change Management: This process is the tactical approach to transformation. It is critical to ensuring that people are included in the change so that fear and resistance are minimized; as most change management frameworks model, fear and uncertainty directly contribute to employee resistance. Creasey (2025) recommended a focused approach of value-based messaging and framing AI augmentation as partnership, not threat, since job redesigns and upskilling will be critical. Additionally, Varma (2024) observed that lessons learned from AI projects in one area of the business often cannot be replicated elsewhere, limiting the repeatability of implementations. Change management complements and enhances project management in that adoption and resistance are measured to ensure the project becomes embedded in operations.
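To make these trust-in-data checks concrete, here is a minimal sketch of the kind of automated audit a KM or data governance process could run before knowledge articles are exposed to an AI assistant. The record fields, review window, and rules are hypothetical illustrations of the idea, not part of any cited framework or specific product.

from datetime import date, timedelta

# Hypothetical knowledge-base records; the field names are illustrative only.
ARTICLES = [
    {"id": "KB-101", "title": "Expense policy", "owner": "Finance", "last_reviewed": date(2024, 11, 15)},
    {"id": "KB-102", "title": "Expense policy (old)", "owner": None, "last_reviewed": date(2021, 6, 1)},
    {"id": "KB-103", "title": "Onboarding checklist", "owner": "HR", "last_reviewed": date(2025, 2, 10)},
]

REVIEW_WINDOW = timedelta(days=365)  # assumed freshness threshold

def audit(articles, today=None):
    """Flag articles that are unowned, stale, or likely duplicates/conflicts."""
    today = today or date.today()
    issues = []
    seen_titles = {}
    for article in articles:
        if article["owner"] is None:
            issues.append((article["id"], "no accountable owner"))
        if today - article["last_reviewed"] > REVIEW_WINDOW:
            issues.append((article["id"], "past review window (possibly stale)"))
        base_title = article["title"].lower().split(" (")[0]
        if base_title in seen_titles:
            issues.append((article["id"], f"possible conflict with {seen_titles[base_title]}"))
        else:
            seen_titles[base_title] = article["id"]
    return issues

if __name__ == "__main__":
    for article_id, problem in audit(ARTICLES):
        print(f"{article_id}: {problem}")

In practice, checks like these would run against the organization's actual knowledge repository, and their findings would feed back into the KM playbook's review cycle.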
It should be noted that while ethical considerations are not a specific process, they should be embedded within processes, with oversight to ensure fairness, accountability, and transparency. An additional process that belongs on this list is continuous improvement, which is a component of change resilience and can be embedded in the organizational change strategy.
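As one way of embedding that oversight, a fairness check can be expressed as a repeatable test rather than a one-off review. The sketch below is purely illustrative (the group labels, decisions, and tolerance are invented for the example); it compares approval rates across groups in a set of AI-assisted decisions and flags a gap large enough to warrant human review.

from collections import defaultdict

# Hypothetical model decisions: (group label, approved?) pairs for illustration only.
DECISIONS = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

MAX_GAP = 0.20  # assumed tolerance for the approval-rate gap between groups

def approval_rates(decisions):
    """Compute the share of approved decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

def flag_disparity(decisions, max_gap=MAX_GAP):
    """Return the approval-rate gap and whether it exceeds the tolerance."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > max_gap

if __name__ == "__main__":
    gap, needs_review = flag_disparity(DECISIONS)
    print(f"Approval-rate gap: {gap:.2f}; escalate for human review: {needs_review}")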
Comments? Ideas?
References:
Abbu, H., Mugge, P., & Gudergan, G. (2022). Ethical Considerations of Artificial Intelligence: Ensuring Fairness, Transparency, and Explainability. 2022 IEEE 28th International Conference on Engineering, Technology and Innovation (ICE/ITMC) & 31st International Association for Management of Technology (IAMOT) Joint Conference, 1–7. https://doi.org/10.1109/ICE/ITMC-IAMOT55089.2022.10033140
Creasey, T. (2025, April 29). How AI-driven change is different, and what you should do about it [LinkedIn post]. https://www.linkedin.com/feed/update/urn:li:activity:7323021819509145601/
Fischer, B., & Piskorz-Ryń, A. (2021). Artificial intelligence in the context of data governance. International Review of Law, Computers & Technology, 35(3), 419–428. https://doi.org/10.1080/13600869.2021.1950925
Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance: Organizing data for trustworthy Artificial Intelligence. Government Information Quarterly, 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493
Meroño-Peñuela, A., Simperl, E., Kurteva, A., & Reklos, I. (2025). KG.GOV: Knowledge graphs as the backbone of data governance in AI. Journal of Web Semantics, 85. https://doi.org/10.1016/j.websem.2024.100847
Usmani, U. A., Happonen, A., & Watada, J. (2023). Human-Centered Artificial Intelligence: Designing for User Empowerment and Ethical Considerations. 2023 5th International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), 1–7. https://doi.org/10.1109/HORA58378.2023.10156761
Varma, T. (2024). "Cognitive Chasms" A Grounded Theory of GenAI Adoption. ProQuest Dissertations & Theses Global. https://www.proquest.com/dissertations-theses/cognitive-chasms-grounded-theory-genai-adoption/docview/3168170483/se-2
Natalie Griego-Pavon, you've nailed some of the most critical issues around adoption of AI. Most enterprises were designed for maximum operational efficiency, following the template cast in the 20th century that focused on creating local maxima, with the implied assumption that if every department took care of its local maximum (i.e., departmental outputs), the global maximum (i.e., organizational throughput, or even the final outcomes) would simply take care of itself! Unfortunately, all it created were islands with no connecting bridges. Each department got stuck knee-deep in data swamps that did not talk to the rest of the organization and focused on minimizing the loss function for its own department (e.g., reducing lead time or cost even as overall quality and customer NPS went down because, well, that was some other department's problem!). The whole idea of "connected data" has been around before, but sadly it is not just a question of technology alone; it is more about the collective mindset of an organization, wherein disparate departments and functions agree that sharing data and creating common standards of data governance is non-negotiable.