Blind Trust: Advanced AI, Basic AI, and Ethical Frameworks
As artificial intelligence (AI) becomes increasingly accessible and its applications continue to evolve, users increasingly rely on these tools to guide everyday decision-making without critical evaluation. AI models need ethical frameworks to prevent misinformation, bias, and manipulation, but do they include these safeguards, and do users expect them? da Silva (2024) concluded that while AI can enhance productivity, over-reliance on it can erode critical thinking. Her research also highlighted perceptions of algorithmic bias and data-privacy issues in the use of AI tools and applications.
Building trust in AI should follow the same natural process as building trust between people; trust is essential to any relationship. To partner with these tools, we need transparency about their ethical frameworks, data sources, and response generation. But how does a user learn the morality or ethical framework embedded in an advanced AI chatbot or a basic AI application? The answer differs for an application versus an AI chatbot.
Basic vs. Advanced AI:
Basic AI applications are limited, rule-based algorithmic tools that use AI elements such as data processing, pattern recognition, and automation.
Advanced AI includes the above elements plus machine learning (ML) and adaptability, natural language processing (NLP), and decision-making complexity such as deep learning and adaptive reasoning.
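To make this distinction concrete, here is a minimal, hypothetical sketch (not taken from any product discussed here; the names and rules are invented for illustration) contrasting a fixed rule-based check, as in basic AI, with a toy adaptive component that stands in for the machine-learning element of advanced AI:

```python
# Illustrative contrast (invented example): a basic, rule-based check
# versus an "advanced" component that adapts its behavior to data.

def basic_spam_check(message: str) -> bool:
    """Basic AI: a fixed rule that never changes without a code update."""
    banned = {"free money", "click here"}
    return any(phrase in message.lower() for phrase in banned)

class AdaptiveSpamCheck:
    """Advanced AI element: adjusts word weights from labeled examples
    (a toy stand-in for machine learning and adaptability)."""

    def __init__(self):
        self.weights: dict[str, float] = {}

    def learn(self, message: str, is_spam: bool) -> None:
        # Shift each word's weight toward or away from "spam".
        for word in message.lower().split():
            self.weights[word] = self.weights.get(word, 0.0) + (1.0 if is_spam else -1.0)

    def predict(self, message: str) -> bool:
        # Classify by the summed learned weights of the message's words.
        score = sum(self.weights.get(w, 0.0) for w in message.lower().split())
        return score > 0
```

The rule-based check behaves identically forever, while the adaptive check's behavior depends on the data it has seen, which is exactly why its reasoning is harder to audit from the outside.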
Examples of ethical frameworks include utilitarianism, deontology, virtue ethics, and bioethics. Ethical frameworks for advanced AI chatbots should be built into their training models; OpenAI's ChatGPT is a good example, since a user can ask ChatGPT directly about its ethical framework. Basic AI applications are less transparent: their responses are based on fixed criteria derived from a measurement scale, which can be complicated. To provide a general overview, this researcher reviewed Yuka (https://yuka.io/en/), an app that scores foods as healthy or unhealthy according to its own calculation of what is good or bad. Below is a high-level comparison of both:
Ethical Framework
ChatGPT (advanced AI): Guided by principles of accuracy, fairness, privacy, and harm reduction; strong alignment with bioethics
Yuka (basic AI application): Unknown
Source Data
ChatGPT (advanced AI): Publicly available information, proprietary datasets owned by OpenAI, and Web tools
Yuka (basic AI application): In January 2018, Yuka began building its own database; it also grows through user contributions
Training Model
ChatGPT (advanced AI): Publicly available text, licensed data, human trainers, and web tools
Yuka (basic AI application): The rating system is based on the Nutri-Score evaluation and selected articles
Response Output
ChatGPT (advanced AI): Generated responses based on the Training Model
Yuka (basic AI application): Scores ranging from good to bad (opinions)
Yuka provides a list of articles and the Nutri-Score evaluation as the basis of its rating system for scoring the ingredients within products. However, the fine print communicates that the responses are Yuka's opinion, not a score of the product itself. Nutritionist Abby Langer (n.d.) evaluated the application and its responses and concluded that the primary issues with the scores are that the app weighs individual ingredients, which can produce a score that may not align with an individual's health plan, and that it oversimplifies the definition of safe ingredients. For example, the app gave both cheese and nuts a low score even though both are good sources of protein, calcium, and other vitamins and minerals. Her overarching point was that foods contain a variety of benefits, and a single score does not give the user transparency or alignment with their health plan, which can confuse people trying to make healthy choices.
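Langer's criticism can be illustrated mechanically. The sketch below is a hypothetical rule-based scorer with invented weights and thresholds; it is emphatically not Yuka's actual algorithm, which is not public in this form. It shows how fixed, context-free per-nutrient rules can place a nutrient-dense food like nuts near a sugary drink:

```python
# Hypothetical rule-based food scorer with invented weights, loosely
# inspired by Nutri-Score-style point systems. NOT Yuka's actual
# algorithm; it only illustrates how fixed per-nutrient rules behave.

def rule_based_score(food: dict) -> int:
    """Score a food 0-100 from fixed per-100g nutrient rules."""
    score = 100
    # Penalize "negative" nutrients with context-free rules (capped).
    score -= min(food.get("fat_g", 0) * 2, 40)
    score -= min(food.get("sugar_g", 0) * 3, 30)
    score -= min(food.get("sodium_mg", 0) // 100, 20)
    # Reward "positive" nutrients, also with fixed caps.
    score += min(food.get("protein_g", 0), 15)
    score += min(food.get("fiber_g", 0) * 2, 10)
    return max(0, min(100, int(score)))

# Approximate per-100g figures. Almonds are rich in protein, fiber, and
# micronutrients, yet the flat fat penalty dominates their score.
almonds = {"fat_g": 50, "protein_g": 21, "fiber_g": 12, "sugar_g": 4}
soda = {"sugar_g": 10, "sodium_mg": 15}

print(rule_based_score(almonds))  # 73
print(rule_based_score(soda))     # 70, nearly the same as whole nuts
```

Because the weights are fixed and blind to context, the output cannot reflect an individual's health goals; a single number of this kind oversimplifies, which is Langer's point.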
Generally, reliance on AI and applications has given users easier access to information and increased productivity. The benefit of an advanced AI chatbot is that users can ask it for its source data and ethical framework, and it avoids subjective opinions, allowing users to form their own judgments from the generated responses. For basic AI applications such as Yuka, it is difficult to assess whether an ethical framework exists, visibility into data sources is limited, and most, if not all, users do not dig into how responses are generated; they take the outputs as truth when they may in fact be opinions based on loosely defined scales.
Final verdict: always be curious and don’t assume applications or chatbots have integrated morality guardrails, as systems, much like people, are not perfect.
#ethics #advancedai #doctorate #research #bioethics
References:
ChatGPT. (n.d.). https://chatgpt.com/
da Silva, A. F. A. (2024). Critical thinking and artificial intelligence in education (Order No. 31790603). Available from ProQuest Dissertations & Theses Global. (3143986130). https://www.proquest.com/dissertations-theses/critical-thinking-artificial-intelligence/docview/3143986130/se-2
Langer, A. (n.d.). https://abbylangernutrition.com/yuka-app-review-scan-or-scam/