To safely and reliably use a system for a specific or even general task, it must be trustworthy, i.e. competent and truthful. This wiki describes problems that already occur due to advanced Narrow AI and ways to analyse and mitigate these risks.
The focus here is on engineering "safety" for individual humans with a historical and developing culture, as opposed to AI "ethics" which has been occupied with group membership.
That being said, the enthusiasm of AI researchers to dig deeper into the human cognitive systems are also having deep consequences on our attention and having meaningful lives. An important concept that is therefore being put forward is that of Crafted Cognition in that responsibilty for outcomes can be taken on by the individual human actor through their individual craft.
Through this, the individual's agency in the world can be reclaimed from "artificial intelligence" back to the physically embodied human and their biological track record towards what should be. This might side-step the hubris of the intellect and the nihilistic escapism of worshiping the artifact per-se towards a more useful and wiser outcome.
"Where the tree of knowledge stands, there is always paradise": thus speak the oldest and the youngest serpents.
-- Nietzsche, Beyond Good and Evil, 152
Kneeling Bull with Vessel, silver, Proto-Elamite, 3100-2900 B.C.
Problems should be described using
interpretability / explanation does not seem to be the way to look at these risks, especially at scale. To describe is a better perspective.
The intention is that, if you're building or running a software platform and want real engineering safety solutions, and not ideological normativity, this is the place for you.
Safety vs ethics
Threats to the human individual's attention, privacy
Mitigations by prioritizing Logos (truthfulness, authenticity) and prototypes
Describe systems behaviour through characterization and what it values (chat experiments, art)
The problems of sociology, CoCs and other mistakes
Fundamentals from persistence, task-complexity, Piaget
At the moment this project does not focus on existential threat. It is too easy to philosophize. Speculation and thought experiments around GAI are ok, but the goal should be to find tight consistent technical foundations and practical solutions to highly likely risks. There are other platforms that more directly discuss philosophy and AGI (such as Lesswrong)
That said, this wiki is a continual work in progress and we'll find out what is inside and outside of scope.
Code of Conducts are a problem
CoCs rank certain virtues above other, which leads to a chilling effect around deviating from their specific utility function. If you're curious whether your opinion might be censored or lead to getting banned, here are some CoCs that the almighty and capricious Admin finds amusing
Codes of conduct are also central to specifying what is desirable AI behaviour and are therefore fair game for evaluation and adaptation.
Work towards a concise, clear and consistent theory
We're working towards real solutions to real problems
Linking out
Try to avoid linking out to wikipedia within the text. It is massive and can derail the point being made. Rather quote what you need and provide a [1] citation link through [[1](https://en.wikipedia.org/wiki/Somewhere)]
Sign in
You need to log in with one of the authentication providers.
Say Hello in a comment!