Building Just Architectures: Toward a Value-Aware, Auditable, and Participatory Ontology for AI Systems

Abstract

This paper proposes a conceptual framework for aligning policy and AI development with justice through the construction of a value-aware ontology and a system of ethical iteration. Drawing from feminist epistemology, social ontology, and critical AI ethics, it outlines architectural components necessary to bridge AI, policy, and human-centered values. Central to the proposal is a schema for fairness as a multidimensional class, an audit-friendly system of ethical versioning, and participatory protocols for evolving these frameworks over time. By embedding contestability, stakeholder input, and transparency into AI systems from the ground up, this work aims to foster more just and accountable technological futures.

Introduction

Artificial intelligence systems increasingly shape the conditions of life — influencing access to healthcare, credit, employment, and even justice itself. Yet, as critical scholarship has demonstrated (D’Ignazio & Klein, 2020; West et al., 2019), these systems often reproduce and amplify existing inequalities, embedding biases under the guise of neutral computation. Feminist and postcolonial critiques have shown that many AI systems are not only technical artifacts but moral and political actors.

In response, this paper proposes a foundational ethical architecture for AI, along with proposals for how to incorporate it into policy. The architecture integrates a value-aware ontology, participatory design processes, and recursive auditability. The goal is not only to prevent harm but to cultivate AI systems that are reflexively aligned with evolving notions of justice and dignity. We aim to design systems that can explain the ethical assumptions behind their actions, be challenged by affected communities, and adapt as society changes.

I explore this architecture through three primary lenses:

  1. A value-aware ontology schema that recognizes the contested nature of fairness;
  2. A protocol for participatory ethical iteration through stakeholder engagement and audit loops;
  3. A framework for ethical versioning — aligning systems with shifting policies, norms, and definitions of “better.”

Foundations and Theoretical Commitments

To build just architectures for AI, we must begin with the epistemological, social, and political assumptions embedded in systems design. This paper draws from feminist epistemology, social ontology, and critical AI ethics to anchor a framework that is not only technically feasible but morally attentive and socially accountable.

Situated Knowledge and Power-Awareness

Feminist epistemology, especially as articulated in Data Feminism (D’Ignazio & Klein, 2020), insists that “who is speaking to whom turns out to be as important for meaning and truth as what is said” (Alcoff, 1991). Knowledge is not produced in a vacuum; it is shaped by positionality, power, and institutional context. As such, AI systems cannot be considered neutral. They consolidate ideas “about who is entitled to exercise power and who is not” and who is rendered invisible or misrepresented (D’Ignazio & Klein, 2020).

D’Ignazio and Klein urge a “commitment to co-liberation,” recognizing that oppressive systems harm all of us and undermine the integrity of our work. This ethic of interdependence informs the architectural goal of this paper: to design AI systems that do not simply mitigate harm but actively foster justice through recognition, redistribution, and representation.

Artificial intelligence systems are not merely technical tools; they are “systems of discrimination: they are classification technologies that differentiate, rank, and categorize” (West et al., 2019). As Sarah Myers West and colleagues show, these tools often encode and justify historical patterns of exclusion — from gender and racial profiling to biased employment screening.

This is not accidental. These systems are built in “spaces that… tend to be extremely white, affluent, technically oriented, and male,” reinforcing existing power asymmetries (West et al., 2019). The resulting harms are multifaceted: not only economic, but representational and political. People are categorized in ways that affect how they are perceived, what opportunities they are afforded, and how their social standing is interpreted by institutions (West et al., 2019).

Efforts to simply “tweak” biased systems are not sufficient. “Locating individual biases… becomes an exercise in futility,” unless we interrogate the social logics these systems operate within — who they benefit, who they harm, and how (West et al., 2019). In other words, ethical system design must extend beyond data inputs and technical audits to a redesign of the institutional and normative architectures themselves.

Meaning in AI systems — like meaning in human language — is not fixed. It is shaped by those who define, encode, and interpret system behavior. Thus, “who gets to speak” in defining fairness, harm, or relevance is not a peripheral issue, but central to ethical design (Alcoff, 1991). Representational harms, as West et al. note, often go unnoticed because they occur within back-end systems, hidden behind trade secrecy, inaccessible to those affected (West et al., 2019).

In contrast, this paper proposes that AI systems be designed for contestability. Users and stakeholders must be able to interrogate and challenge the logics behind automated decisions. This requires transparency not only in algorithmic code, but in the ontological assumptions — how classes, categories, and values are defined.

Why Ontology

In contrast to the largely statistical and opaque nature of modern machine learning systems, including large language models (LLMs) and neural networks, formal ontologies offer a symbolic, structured, and transparent approach to meaning. Neural networks are trained on vast datasets and derive patterns through optimization, resulting in powerful but inscrutable models. Although neural networks are often depicted as graphs, and graph neural networks are explicitly structured as such, these graph-based architectures differ from ontologies in one critical regard: ontologies are semantically explicit and interpretable, while neural models encode meaning implicitly and are often opaque. Formal ontologies define entities, relationships, and constraints with logical rigor, offering not only transparency but also a stable ground for ethical reasoning and bias detection. This structure enables three powerful affordances in the design of artificial intelligence systems: (1) the ability to make systems transparent and their logic traceable; (2) a framework for reverse engineering and exposing hidden biases in learned or coded assumptions; and (3) a platform that moves beyond syntactic language to engage with semantic meaning, allowing more consistent human-agent communication. Unlike tuning deep models in high-dimensional vector spaces, ontology-based design opens up the possibility of cross-disciplinary, communicative reasoning that is both inspectable and evolvable.

Ontologies are not neutral. As Guarino notes, they define what counts as knowable, shaping both technical structures and social meaning (Guarino, 1998). They are formal commitments to a shared conceptualization of a domain, and thus have political and epistemic weight. Ontologies can be implemented in AI systems, knowledge graphs, or policy-aligned metadata, enabling systems to “know” and communicate what kind of fairness or justice is being applied. This allows for critical interrogation, contestation, and adaptation as social contexts shift.
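
As a minimal sketch of what such policy-aligned metadata could look like in practice, the snippet below uses the rdflib library (an implementation choice, not one specified in this paper) to tag a hypothetical automated decision with the fairness definition it applied, so that the commitment is machine-readable and open to interrogation. The names ex:LoanDecision42 and ex:appliesFairnessDefinition are illustrative inventions.

Code
# Hypothetical sketch: annotating a decision with the fairness definition it applied.
# Assumes rdflib; the vocabulary and the decision named below are illustrative only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/ethics#")

g = Graph()
g.bind("ex", EX)

# Declare the fairness vocabulary the system commits to
g.add((EX.DistributiveFairness, RDFS.subClassOf, EX.Fairness))
g.add((EX.DistributiveFairness, RDFS.comment,
       Literal("Outcomes are distributed equitably across groups.")))

# Tag a concrete decision with the fairness definition that governed it
g.add((EX.LoanDecision42, RDF.type, EX.AutomatedDecision))
g.add((EX.LoanDecision42, EX.appliesFairnessDefinition, EX.DistributiveFairness))

# An auditor can now ask: which fairness notion was applied, and how is it defined?
for _, _, definition in g.triples((EX.LoanDecision42, EX.appliesFairnessDefinition, None)):
    print(definition, "-", g.value(definition, RDFS.comment))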

“In the history of Western philosophy, there have existed multiple, competing definitions and ontologies of truth: correspondent, idealist, pragmatist, coherentist, and consensual notions.” (Alcoff, 1991) Using ontology in the context of AI gives us the opportunity to be explicit about which definition of truth our technology operates under, and to update it as appropriate.

Background on Ontology

Ontologies are more than abstract philosophical tools—they are infrastructural. As Guarino argues, the construction of an ontology involves both conceptual clarity and interdisciplinary methodology, drawing from philosophy, linguistics, and computer science (Guarino, 1998). In the philosophical sense, an ontology describes a system of categories that represents a particular vision of the world. In AI, ontologies provide formal vocabularies and structured relationships that enable consistent interpretation of terms by humans and machines alike.

Ontologies can describe hierarchies, constraints, and conceptual dependencies, and their sophistication can grow through axiomatic richness or domain-specific detail. They are language-dependent implementations of deeper, language-independent conceptualizations. By encoding what is assumed to be true by a community, ontologies serve as shared grounds for reasoning, data exchange, and semantic interoperability. As such, they play a critical role in information system development and re-engineering, helping to make the implicit explicit—especially in systems where procedural code has historically hidden the ethical assumptions under layers of abstraction.

A strength of ontologies lies in their capacity to reverse engineer and interrogate otherwise unknowable systems. As Guarino notes, every model carries embedded conceptual assumptions; reconstructing those assumptions helps reveal the values and frameworks encoded in AI logic in a human-readable, engageable format. “The user can browse the ontology in order to understand the vocabulary used by the Information System, being able to therefore formulate queries at the desired level of specificity.” (Guarino, 1998) This is fundamentally different from systems into whose workings users have no insight. Ontologies also allow for reuse and adaptation across systems, making it possible to preserve ethical commitments even as infrastructures evolve. When embedded early, they reduce the cost of ethical modeling while increasing semantic consistency and transparency.
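
To make this concrete, here is a small, hypothetical sketch of what “browsing the vocabulary” might look like, again assuming rdflib and an invented ethics vocabulary: the ontology’s classes and properties are enumerated together with their documentation, so an auditor can see exactly which terms the system commits to.

Code
# Hypothetical sketch: "browsing" an ontology's vocabulary, in the spirit of Guarino's
# point that users should be able to inspect the terms a system commits to.
# Assumes rdflib; the small ethics vocabulary below is invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ethics#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Fairness, RDF.type, OWL.Class))
g.add((EX.Fairness, RDFS.comment,
       Literal("Superclass of the fairness notions this system commits to.")))
g.add((EX.DistributiveFairness, RDF.type, OWL.Class))
g.add((EX.DistributiveFairness, RDFS.subClassOf, EX.Fairness))
g.add((EX.DistributiveFairness, RDFS.comment,
       Literal("Outcomes are distributed equitably across groups.")))
g.add((EX.appliesFairnessDefinition, RDF.type, OWL.ObjectProperty))
g.add((EX.appliesFairnessDefinition, RDFS.comment,
       Literal("Links a decision to the fairness definition that governed it.")))

# A user or auditor can enumerate the vocabulary and read its documentation
for cls in g.subjects(RDF.type, OWL.Class):
    print("Class:", cls, "-", g.value(cls, RDFS.comment))
for prop in g.subjects(RDF.type, OWL.ObjectProperty):
    print("Property:", prop, "-", g.value(prop, RDFS.comment))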

Ontologies transcend linguistic specificity. They can align disparate vocabularies while preserving a shared conceptual core, enabling users from different contexts and disciplines to understand and engage with AI systems. As Alcoff explains, meaning must be understood as plural and shifting—and ontologies can accommodate that plurality through disambiguation and mapping (Alcoff, 1991). By allowing users to interface in their own natural language and query at varying levels of specificity, ontologies democratize access to knowledge systems.

Their utility spans multiple stages of AI development: from early requirement analysis and system modeling, to runtime interpretability and user interface navigation. They can support the development of ethical metadata schemas, bridge heterogeneous data platforms, and allow users to query systems using their own language. Most importantly, they offer a technical mechanism for flattening power dynamics by making the system’s internal logic visible, participatory, and adaptable.

“An important reason for using an ontology at run time is enabling the communication between software agents… expressions formulated in terms of an ontology… [require] access to the ontology they commit to.” (Guarino, 1998) In AI, this would allow explicit ethical frameworks to scale and interact with different users. It would do so in a method where “a user is free to adopt his own natural language terms, which are mapped (after a possible disambiguation step) to the Information System vocabulary with the help of the ontology.” (Guarino, 1998) By enabling this kind of interaction—where meaning is both structured and responsive—ontologies support not only technical interoperability but also participatory interpretation, allowing users to shape and contest the systems that shape their lives.
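
A hypothetical sketch of the mapping Guarino describes follows: a user’s own phrases are mapped (after a possible disambiguation step) onto the system’s fairness vocabulary before a query is issued. The term map and the helper function are illustrative assumptions, not part of any standard ontology tooling.

Code
# Hypothetical sketch of mapping a user's own terms onto the system's ontology
# vocabulary before querying, per Guarino's run-time scenario. Assumes rdflib;
# the term map and helper function are illustrative inventions.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/ethics#")
g = Graph()
g.bind("ex", EX)
g.add((EX.DistributiveFairness, RDFS.subClassOf, EX.Fairness))
g.add((EX.ProceduralFairness, RDFS.subClassOf, EX.Fairness))

# User-facing phrases mapped to ontology terms (a disambiguation step could refine this)
user_term_map = {
    "equal outcomes": EX.DistributiveFairness,
    "fair process": EX.ProceduralFairness,
    "due process": EX.ProceduralFairness,
}

def fairness_class_for(user_phrase):
    """Translate a natural-language phrase into the system's fairness vocabulary."""
    term = user_term_map.get(user_phrase.lower())
    if term is None:
        return None
    # Confirm the mapped term is actually a recognized fairness notion in the ontology
    return term if (term, RDFS.subClassOf, EX.Fairness) in g else None

print(fairness_class_for("Fair process"))  # prints the URI for ex:ProceduralFairness
print(fairness_class_for("karma"))         # prints None; a disambiguation step could ask the user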

These functions position ontology not just as a modeling tool, but as a civic one: a technology for communicating and negotiating meaning in pluralistic societies. When integrated into AI development cycles, ontologies help ensure that the systems we build are not only technically functional, but also epistemically and ethically accountable.

Value-Aware Ontology Schema

Below is a first, deliberately simple prototype that demonstrates how this might begin to be implemented. It is a directed graph diagram contrasting the structure of an ontology (purple), a GNN (pink), and a knowledge graph (peach).

Code
import matplotlib.pyplot as plt
import networkx as nx
from matplotlib.patches import Patch

# Define the graph
G = nx.DiGraph()

# Ontology section (purple family)
G.add_edges_from([
    ("Concept:\nPerson", "Property:\nworksFor"),
    ("Property:\nworksFor", "Concept:\nOrganization"),
    ("Concept:\nPerson", "Property:\nhasName"),
    ("Property:\nhasName", "Concept:\nString")
])

# GNN section (rose/coral family)
G.add_edges_from([
    ("Node A", "Node B"),
    ("Node B", "Node C"),
    ("Node C", "Node A")
])

# Knowledge Graph section (peach/coral family)
G.add_edges_from([
    ("Alice", "worksFor"),
    ("worksFor", "OpenAI"),
    ("Alice", "hasName"),
    ("hasName", "Alice Smith")
])

# Set custom positions for clarity
pos = {
    # Ontology
    "Concept:\nPerson": (-4, 2),
    "Property:\nworksFor": (-3, 1),
    "Concept:\nOrganization": (-2, 2),
    "Property:\nhasName": (-3, 0),
    "Concept:\nString": (-2, -1),
    # GNN
    "Node A": (0, 2),
    "Node B": (1, 1),
    "Node C": (0.5, 0),
    # Knowledge Graph
    "Alice": (3, 2),
    "worksFor": (4, 1),
    "OpenAI": (5, 2),
    "hasName": (4, 0),
    "Alice Smith": (5, -1),
}

# Assign consistent colors per category
ontology_color = "#724A73"
gnn_color = "#E07F7E"
kg_color = "#EDA575"

# Node-specific color map
color_map = {
    # Ontology
    "Concept:\nPerson": ontology_color,
    "Property:\nworksFor": ontology_color,
    "Concept:\nOrganization": ontology_color,
    "Property:\nhasName": ontology_color,
    "Concept:\nString": ontology_color,
    # GNN
    "Node A": gnn_color,
    "Node B": gnn_color,
    "Node C": gnn_color,
    # Knowledge Graph
    "Alice": kg_color,
    "worksFor": kg_color,
    "OpenAI": kg_color,
    "hasName": kg_color,
    "Alice Smith": kg_color
}

# Extract node colors in order
node_colors = [color_map[node] for node in G.nodes]

# Draw the graph
plt.figure(figsize=(12, 7))
nx.draw(
    G,
    pos,
    with_labels=True,
    node_color=node_colors,
    node_size=4000,
    font_size=9,
    font_weight='bold',
    edge_color='gray'
)

# Create legend manually
legend_elements = [
    Patch(facecolor=ontology_color, edgecolor='gray', label='Ontology Nodes'),
    Patch(facecolor=gnn_color, edgecolor='gray', label='GNN Nodes'),
    Patch(facecolor=kg_color, edgecolor='gray', label='Knowledge Graph Nodes')
]
plt.legend(handles=legend_elements, loc='lower center', bbox_to_anchor=(0.5, -0.1), ncol=3, frameon=False)

plt.title("Comparison of Ontology, GNN, and Knowledge Graph Structures", fontsize=14)
plt.axis('off')
plt.tight_layout()
plt.show()

The table below contrasts the benefits and trade-offs of ontological design with those of neural networks and knowledge graphs. While neural networks offer flexibility and predictive power, they often do so at the cost of interpretability and auditability. Knowledge graphs provide structural grounding but may lack normative clarity. Ontologies offer a uniquely transparent and participatory approach to ethical AI design.

Characteristic | Formal Ontology | Neural Networks (e.g., GNNs, LLMs) | Knowledge Graphs
Core Approach | Logic-based, symbolic reasoning | Data-driven, statistical pattern recognition | Hybrid of symbolic structure and data
Transparency | High – human-readable and auditable | Low – internal logic often not explainable | Medium – structure is transparent, use may not be
Communication Method | Shared vocabulary of defined concepts | Learned representations with no shared semantics | Can support both structured queries and learning
Runtime Use | Used at runtime for inference and checks | Mostly during training or prediction | Used for reasoning, querying, or ML tasks
Bias Detectability | High – explicit relationships and assumptions | Low – bias is often hidden in weights | Depends on design and modeling choices
Meaning Representation | Semantics are encoded explicitly | Meaning is implicit in vectors | Semantics may be partially encoded and interpretable
Ethical Alignment | Built-in structures support normative reasoning | Emergent patterns lack explicit ethics | Often limited to factual or relational data
User Participation | Supports interface-level querying and customization | Minimal to none – users interact indirectly | Varies depending on implementation
Adaptability | Easily extensible through subclassing and axioms | Requires retraining for changes | Moderate – changes depend on schema control

Fairness Schema in Practice

Fairness is often treated in AI systems as a single, abstract goal—a checkbox or a floating principle. But fairness, like justice itself, is multidimensional and contested. To design systems that understand and reflect this complexity, I propose a value-aware ontology schema as a first iteration of how fairness might be defined within systems. In this schema, fairness is not a singular node but a superclass with structured, definable subclasses.

I draw on philosophical and legal traditions to define three core types of fairness:

  • Distributive Fairness: Are outcomes (e.g., resources, opportunities) distributed equitably across groups?
  • Procedural Fairness: Are the processes that generate decisions transparent, consistent, and inclusive?
  • Relational Fairness: Do systems recognize human dignity, minimize harm, and promote respectful interaction?

The value-aware ontology schema described earlier is exemplified in the fairness model. Rather than treating fairness as a single abstract concept, it is defined as a superclass composed of structured, evaluable subclasses: distributive, procedural, and relational fairness. These categories not only reflect philosophical and legal precedents, but also provide entry points for measurable criteria and system alignment. Each subclass includes concrete examples and associated metrics, supporting more precise and transparent evaluations of ethical performance in AI systems. This formalization is what enables the system to be queryable and contestable.
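
One way this formalization could be made concrete and queryable is sketched below, assuming rdflib; the class and property names (e.g., ex:hasExample, ex:hasMetric) are illustrative rather than a finished schema. The subclasses, examples, and metrics mirror the fairness model above, and a SPARQL query shows how an auditor could ask which metrics evaluate which kind of fairness.

Code
# Hypothetical sketch of the value-aware fairness schema as a queryable ontology.
# Assumes rdflib; the property names (ex:hasExample, ex:hasMetric) are illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/ethics#")
g = Graph()
g.bind("ex", EX)

# Fairness as a superclass with structured subclasses
for sub in ("DistributiveFairness", "ProceduralFairness", "RelationalFairness"):
    g.add((EX[sub], RDFS.subClassOf, EX.Fairness))

# Attach examples and evaluable metrics to two of the subclasses
g.add((EX.DistributiveFairness, EX.hasExample, Literal("equal access to housing")))
g.add((EX.DistributiveFairness, EX.hasMetric, Literal("statistical parity")))
g.add((EX.ProceduralFairness, EX.hasExample, Literal("algorithmic transparency")))
g.add((EX.ProceduralFairness, EX.hasMetric, Literal("explainability")))

# The schema is now queryable: which metrics evaluate which kind of fairness?
query = """
SELECT ?fairness ?metric WHERE {
    ?fairness rdfs:subClassOf ex:Fairness .
    ?fairness ex:hasMetric ?metric .
}
"""
for fairness, metric in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(fairness, "is evaluated by", metric)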

This structure can evolve with input from affected communities. In future iterations, subclasses may expand to reflect cultural, legal, or emerging definitions of fairness — ensuring the ontology remains living, not fixed.

As artificial intelligence systems become increasingly embedded in social, economic, and political decision-making, the need for interpretability, accountability, and shared ethical grounding grows more urgent. Formal ontologies offer a method to bridge technical infrastructure with moral imagination: they define concepts like fairness in structured, computable ways while maintaining ties to human values and outcomes. In doing so, they make space for interdisciplinary collaboration—enabling ethicists, policymakers, and engineers to co-develop systems grounded in shared meaning. The example of fairness demonstrates how complex ethical ideas can be decomposed into subcategories, examples, and measurable outcomes within a transparent structure. These representations not only clarify what values are embedded into our systems, but also allow us to surface where bias, harm, or exclusion may reside. As new AI tools accelerate pattern recognition and optimization, ontological frameworks provide the semantic scaffolding needed to ensure these tools serve justice rather than distort it. When integrated thoughtfully, they hold the potential to democratize design and serve as powerful equalizers in the pursuit of ethical technology.

Code
import matplotlib.pyplot as plt
import networkx as nx
from matplotlib.patches import Patch

# Create the graph
G = nx.DiGraph()

# Root node
G.add_node("Fairness")

# Subcategories
subcategories = ["Distributive", "Procedural", "Relational"]
for sub in subcategories:
    G.add_edge("Fairness", sub)

# Define examples and link them to subcategories
example_map = {
    "Distributive": {
        "examples": {
            "resource\nallocation": ["statistical\nparity"],
            "equal access\nto housing": ["equal\nopportunity"]
        }
    },
    "Procedural": {
        "examples": {
            "due\nprocess": ["consistency"],
            "algorithmic\ntransparency": ["explainability"]
        }
    },
    "Relational": {
        "examples": {
            "non-\ndiscrimination": ["user\nperception"],
            "respectful\nlanguage": ["harm\nreporting"]
        }
    }
}

# Add examples and metrics, connecting metrics to their specific examples
for category, data in example_map.items():
    for ex_label, metric_list in data["examples"].items():
        example_node = f"Example:\n{ex_label}"
        G.add_edge(category, example_node)
        for metric in metric_list:
            metric_node = f"Metric:\n{metric}"
            G.add_edge(example_node, metric_node)

# Positioning nodes
pos = {
    "Fairness": (0, 3),
    "Distributive": (-3, 2),
    "Procedural": (0, 2),
    "Relational": (3, 2),

    # Examples
    "Example:\nresource\nallocation": (-4, 1),
    "Example:\nequal access\nto housing": (-2, 1),
    "Example:\ndue\nprocess": (-1, 1),
    "Example:\nalgorithmic\ntransparency": (1, 1),
    "Example:\nnon-\ndiscrimination": (2, 1),
    "Example:\nrespectful\nlanguage": (4, 1),

    # Metrics
    "Metric:\nstatistical\nparity": (-4, 0),
    "Metric:\nequal\nopportunity": (-2, 0),
    "Metric:\nconsistency": (-1, 0),
    "Metric:\nexplainability": (1, 0),
    "Metric:\nuser\nperception": (2, 0),
    "Metric:\nharm\nreporting": (4, 0),
}

# Color scheme
ontology_color = "#724A73"   # Deep plum
example_color = "#A48CBC"    # Soft violet
metric_color = "#CF80A0"     # Muted rose

# Assign node colors
node_colors = []
for node in G.nodes:
    if node == "Fairness" or node in subcategories:
        node_colors.append(ontology_color)
    elif node.startswith("Example:"):
        node_colors.append(example_color)
    elif node.startswith("Metric:"):
        node_colors.append(metric_color)
    else:
        node_colors.append("#CCCCCC")

# Draw the graph
plt.figure(figsize=(14, 8))
nx.draw(
    G,
    pos,
    with_labels=True,
    node_color=node_colors,
    node_size=3500,
    font_size=8.5,
    font_weight='bold',
    edge_color='gray'
)

# Add legend
legend_elements = [
    Patch(facecolor=ontology_color, edgecolor='gray', label='Ontology Classes'),
    Patch(facecolor=example_color, edgecolor='gray', label='Examples'),
    Patch(facecolor=metric_color, edgecolor='gray', label='Metrics')
]
plt.legend(handles=legend_elements, loc='lower center', bbox_to_anchor=(0.5, -0.08), ncol=3, frameon=False)

plt.title("Ontology of Fairness (Example-Specific Metrics)", fontsize=14)
plt.axis('off')
plt.tight_layout()
plt.show()

Ethical Versioning and Policy Feedback Loops

The values embedded into AI systems are rarely static. As cultural norms, legal landscapes, and ethical insights evolve, so too must the architectures that reflect them. To support this dynamism, I propose an ethical versioning system—a structured process for updating the ethical assumptions and value definitions within AI systems over time.

Rather than treating ethics as a one-time design decision, versioning treats it as a series of revisions. Each new version of an AI model (or its decision logic) includes metadata about the ethical standards it adheres to, and how those standards have changed. This allows for reflection and accountability, especially as AI becomes increasingly involved in decisions that affect human rights, well-being, and opportunity.

Key components of this system include:

  • Value definition logs: A changelog of how key terms (e.g. fairness, harm, consent) are defined and refined.
  • Version control for ethical assumptions: Timestamped updates that track shifts in ethical stances, stakeholder feedback, or policy mandates.
  • Policy integration hooks: Alignment points where legal or social policy updates can trigger schema revisions or re-auditing.

These mechanisms enable transparency and adaptability in ethical reasoning. They also help to formalize public participation by making the underlying definitions legible and changeable. Ontologies allow us to operationalize this process—not only defining what fairness means in a system, but allowing that meaning to evolve collaboratively and be documented over time.
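
As a minimal sketch of these components, the snippet below uses plain Python; the field names and the rule that a policy-driven change triggers re-auditing are assumptions for illustration, not a proposed standard.

Code
# Hypothetical sketch of a value-definition log with a simple policy integration hook.
# Field names and the "requires_reaudit" rule are illustrative assumptions.
from datetime import date

value_definition_log = []

def log_value_change(term, old_definition, new_definition, source, policy_reference=None):
    """Append a timestamped entry recording how a key term's definition changed."""
    entry = {
        "term": term,
        "old_definition": old_definition,
        "new_definition": new_definition,
        "source": source,                      # e.g. stakeholder feedback, audit, policy mandate
        "policy_reference": policy_reference,  # optional link to a legal or policy update
        "date": date.today().isoformat(),
        # A change driven by policy triggers re-auditing of systems that use this term
        "requires_reaudit": policy_reference is not None,
    }
    value_definition_log.append(entry)
    return entry

log_value_change(
    term="fairness",
    old_definition="statistical parity across groups",
    new_definition="statistical parity plus procedural transparency",
    source="community advisory board",
    policy_reference="local AI accountability ordinance (hypothetical)",
)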

As Alcoff reminds us, “there is no neutral place to stand free and clear in which one’s words do not prescriptively affect or mediate the experience of others” (Alcoff, 1991). Ontology, in this sense, is not simply a formal tool but a moral one. It allows us to ask who gets to define the terms of our technologies—and to make that process visible, participatory, and contestable. Instead of enforcing boundaries unilaterally, it opens the boundaries to negotiation.

Participatory Ethical Iteration

Ontologies are not static blueprints; they are dynamic structures designed to evolve with their communities. Participatory ethical iteration refers to the process of revisiting, revising, and refining the conceptual frameworks that underlie AI systems in collaboration with the people they affect. Because ontologies are structured yet interpretable, they provide an ideal foundation for this process. Stakeholders can engage directly with the vocabulary, categories, and assumptions encoded in a system, and propose changes that reflect shifting norms, emerging harms, or overlooked needs. When deployed in systems that support querying, feedback loops, and co-creation, ontologies enable ethical evolution—not just ethical compliance. This iterative method makes values legible and changeable, strengthening legitimacy and long-term alignment in AI design.

Code
# Example: Ethical versioning metadata (conceptual structure)
ethical_versions = [
    {
        "version": "1.0",
        "date": "2023-11-01",
        "fairness_schema": "Distributive only",
        "stakeholder_input": ["engineers", "legal"],
        "notes": "Initial release with basic parity metrics"
    },
    {
        "version": "2.0",
        "date": "2024-04-12",
        "fairness_schema": "Added procedural fairness layer",
        "stakeholder_input": ["community advisory board", "policy analysts"],
        "notes": "Updated after public audit revealed process transparency concerns"
    }
]
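
A small follow-up sketch, building on the conceptual structure above, shows how such metadata could feed an audit loop: comparing two versions surfaces what changed in the fairness schema, who was consulted, and why. The diff_versions helper is an illustrative assumption.

Code
# Hypothetical follow-up: diffing two ethical versions to surface what changed
# and who participated, supporting audit loops and participatory review.
def diff_versions(older, newer, fields=("fairness_schema", "stakeholder_input", "notes")):
    """Return a summary of which tracked fields changed between two version records."""
    changes = {}
    for field in fields:
        if older.get(field) != newer.get(field):
            changes[field] = {"from": older.get(field), "to": newer.get(field)}
    return changes

for field, change in diff_versions(ethical_versions[0], ethical_versions[1]).items():
    print(f"{field}: {change['from']!r} -> {change['to']!r}")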

Conclusion and Future Work

This paper offers a prototype architecture for ethically aligned AI: a value-aware ontology embedded in systems designed to evolve, to listen, and to be challenged. By breaking from static ethical checklists and monolithic fairness metrics, we move toward systems that reflect the dynamism and plurality of real-world justice.

The work ahead is significant. From building participatory platforms to crafting governance policies and operational toolkits, the task of realizing this vision is collaborative. But by starting with an architectural commitment to justice—by building one principled room at a time—we can shape technological futures that are more accountable, equitable, and human.

Ontological design does not promise to solve every problem. As West and colleagues remind us, “some problems… cannot be fixed by a technical solution at all” (West et al., 2019). Yet, by embracing structure, visibility, and contestability as core design principles, we create space for deeper moral reflection and distributed decision-making. Ontology allows us to make the scaffolding of AI systems visible and revisable—to move from hidden assumptions to shared definitions, and from ethical opacity to democratic design.

In Alcoff’s words, “we are collectively caught in an intricate, delicate web in which each action I take, discursive or otherwise, pulls on, breaks off, or maintains the tension in many strands of a web in which others find themselves moving also” (Alcoff, 1991). This vision of entangled responsibility should guide our design practice. What we define, model, and build in AI systems reverberates across that web. The challenge is not merely technical. It is ethical, collective, and ongoing.

References

Alcoff, L. (1991). The problem of speaking for others. Cultural Critique, 20, 5–32. https://doi.org/10.2307/1354221
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press. https://datafeminism.mitpress.mit.edu
Guarino, N. (1998). Formal ontology and information systems. Proceedings of the 1st International Conference on Formal Ontology in Information Systems (FOIS), 3–15.
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race and power in AI. AI Now Institute. https://ainowinstitute.org/discriminatingsystems.html

Acknowledgments

The author gratefully acknowledges the use of OpenAI’s ChatGPT as a tool for refining conceptual frameworks, generating early drafts, receiving feedback on code structure, and sourcing reading suggestions relevant to AI ethics, ontology, and participatory design. While all interpretations and final arguments are the author’s own, this tool played a supportive role in the iterative development of the ideas presented here.

All written content © Liz Kovalchuk. This website’s essays and writing are shared under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).

Any code snippets or tools provided are licensed under the MIT License unless otherwise stated.