Blind Trust – Advanced AI, Basic AI, and Ethical Frameworks

As artificial intelligence (AI) becomes increasingly accessible and its applications continue to evolve, users increasingly rely on these tools to guide everyday decisions without critically evaluating the output. AI models need ethical frameworks to prevent misinformation, bias, and manipulation, but do they include these safeguards, and do users expect them? Research from da Silva (2024) concluded that while AI can enhance productivity, over-reliance on it can erode critical thinking. Her research also highlighted users' perceptions of algorithmic bias and data privacy concerns in AI tools and applications.

Building trust in AI should follow the same natural process we use with people. Trust is essential to any relationship. To partner with these tools, we need transparency about their ethical framework, data sources, and how responses are generated. But how does a user know what morality or ethical framework is embedded in an advanced AI chatbot or a basic AI application? The answer differs for an application versus an AI chatbot.

Basic vs. Advanced AI:

Basic AI applications are limited, rule-based, algorithmic tools that use AI elements such as data processing, pattern recognition, and automation.

Advanced AI includes the above elements plus machine learning (ML), adaptability, natural language processing (NLP), and more complex decision-making such as deep learning and adaptive reasoning.
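To make the distinction concrete, here is a minimal sketch in Python (hypothetical rules, thresholds, and data, not any real product's logic): a basic, rule-based tool applies a fixed, hand-written check, while a learning-based tool derives its cutoff from example data.

```python
# Hypothetical example: a fixed rule vs. a cutoff learned from data.

# Basic AI: a hand-written rule. The threshold never changes unless a
# developer edits it.
def rule_based_flag(sugar_g: float) -> str:
    return "high risk" if sugar_g > 22.5 else "low risk"

# Advanced AI (toy stand-in for machine learning): derive the cutoff from
# labeled examples, here as the midpoint between the two class averages.
def learn_flag(samples, labels):
    low = [x for x, y in zip(samples, labels) if y == 0]
    high = [x for x, y in zip(samples, labels) if y == 1]
    cutoff = (sum(low) / len(low) + sum(high) / len(high)) / 2
    return lambda x: "high risk" if x > cutoff else "low risk"

print(rule_based_flag(30.0))                      # high risk (fixed rule)
flag = learn_flag([5, 10, 30, 40], [0, 0, 1, 1])  # learned cutoff = 21.25
print(flag(35))                                   # high risk (derived from data)
```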

Examples of ethical frameworks include Utilitarianism, Deontology, Virtue Ethics, and Bioethics. Ethical frameworks for advanced AI chatbots should be built into their training models. A good example of this is OpenAI's ChatGPT: a user can simply ask the chatbot about its ethical framework. For basic AI applications, this is not as transparent, as their responses are based on fixed criteria and derived from a measurement scale, which can be complicated. To provide a general overview, this researcher reviewed Yuka (https://yuka.io/en/), an app that scores how healthy a food is according to its own calculation of what is good or bad. Below is a high-level review of both:

Ethical Framework

ChatGPT (advanced AI): Guided by principles of accuracy, fairness, privacy, and harm reduction; strong alignment with bioethics.

Yuka (basic AI application): Unknown.

Source Data

ChatGPT (advanced AI): Publicly available information, proprietary datasets owned by OpenAI, and web tools.

Yuka (basic AI application): Yuka began building its own database in January 2018; it also grows through user contributions.

Training Model

ChatGPT (advanced AI): Publicly available text, licensed data, human trainers, and web tools.

Yuka (basic AI application): The rating system is based on the Nutri-Score evaluation and selected articles.

Response Output

ChatGPT (advanced AI): Generated responses based on the training model.

Yuka (basic AI application): Scores ranging from good to bad (opinions).

Yuka provides a list of articles and the Nutri-Score evaluation as the basis of its rating system for scoring the ingredients within products. However, the fine print communicates that the responses are Yuka's opinion, not an objective score of the product itself. Nutritionist Abby Langer (n.d.) evaluated the application and its responses and concluded that the primary issues with the scores are that the app weighs individual ingredients, which can produce a score that may not align with an individual's health plan, and that it oversimplifies the definition of safe ingredients. For example, the app gave both cheese and nuts a low score even though both are good sources of protein, calcium, and other vitamins and minerals. Her overarching point was that foods offer a variety of benefits, and a single score does not give the user transparency or alignment with their health plan, which can confuse people trying to make healthy choices.
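To illustrate Langer's point, here is a minimal sketch of a fixed-weight ingredient scorer. The weights and nutrient values are hypothetical, not Yuka's or Nutri-Score's actual formula; it only shows how a one-size-fits-all scale can under-rate a nutrient-dense food such as cheese.

```python
# Hypothetical fixed-weight scorer: every food is judged by the same
# per-nutrient weights, regardless of the user's health plan.
WEIGHTS = {
    "saturated_fat_g": -4.0,    # penalized
    "sodium_mg":       -0.01,   # penalized
    "protein_g":       +1.0,    # rewarded
    "calcium_mg":      +0.002,  # rewarded, but weighted lightly
}

def score(nutrients: dict) -> float:
    """Sum each nutrient value times its fixed weight; higher reads as 'better'."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in nutrients.items())

# Approximate values per 100 g (illustrative only).
cheese = {"saturated_fat_g": 19, "sodium_mg": 620, "protein_g": 25, "calcium_mg": 700}
sweetened_oat_drink = {"saturated_fat_g": 0.3, "sodium_mg": 40, "protein_g": 1, "calcium_mg": 120}

print(round(score(cheese), 1))               # -55.8: fat and sodium penalties swamp protein and calcium
print(round(score(sweetened_oat_drink), 1))  # -0.4: scores far "better" despite offering less nutrition
```

A user following a high-protein or calcium-focused plan would read those two numbers very differently, which is exactly the transparency gap Langer describes.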

Generally, reliance on AI and applications has given users easier access to information and increased productivity. A benefit of an advanced AI chatbot is that users can ask it about its source data and ethical framework, and it avoids delivering a single subjective opinion; instead, it lets users form their own opinions or judgments from the generated responses. For basic AI applications such as Yuka, it is difficult to assess whether there is an ethical framework, there is limited visibility into the data sources, and most, if not all, users do not dig into how responses are generated; they take the outputs as truth when they may in fact be opinions based on loosely defined scales.
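As a minimal sketch of what "asking the model" can look like programmatically, the snippet below uses the OpenAI Python SDK. The model name is an assumption, and the reply is whatever the model chooses to disclose, not a guaranteed audit of its training data or guardrails.

```python
# Minimal sketch: ask an advanced AI chatbot to describe its own guidelines.
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": (
                "What ethical guidelines shape your answers, and what kinds of "
                "data were you trained on? Please note any limits on what you "
                "can disclose."
            ),
        }
    ],
)

print(response.choices[0].message.content)  # the model's self-description, to be read critically
```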

Final verdict: always be curious and don’t assume applications or chatbots have integrated morality guardrails, as systems, much like people, are not perfect.

#ethics #advancedai #doctorate #research #bioethics

References:

ChatGPT. (n.d.). https://chatgpt.com/

da Silva, A. F. A. (2024). Critical thinking and artificial intelligence in education (Order No. 31790603). Available from ProQuest Dissertations & Theses Global. (3143986130). https://www.proquest.com/dissertations-theses/critical-thinking-artificial-intelligence/docview/3143986130/se-2

Langer, A. (n.d.). https://abbylangernutrition.com/yuka-app-review-scan-or-scam/

 
