An AI that minimizes trust asymmetries in its objective function can exploit well-known human biases (credulity, the tendency to follow rules of thumb; cognitive dissonance, the tendency to double down on beliefs; the human inclination to spread pleasant-sounding lies) to assess the provenance of data and context, in applications related to the detection and mitigation of fake news and deep fakes. The embedded Oracle can operate as a standalone feed, or as a complement to specialized technologies that take advantage of blockchain features for this purpose (no single arbiter of truth, a public record). In this application, genetic programming provides both an audit trail on its genome (tree structures) and an event-based notification function (triggered when a breakdown of trust symmetry occurs).
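To illustrate the audit-trail and notification idea, a minimal sketch follows; all names (`GenomeNode`, `watch_trust_symmetry`, the threshold) are hypothetical and only stand in for the genome tree and event mechanism described above.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: a GP genome node that logs every mutation
# (the audit trail) and a watcher that fires a callback when a
# trust-symmetry score falls below a threshold (the event-based
# notification function).

@dataclass
class GenomeNode:
    op: str                                            # operator or terminal symbol
    children: List["GenomeNode"] = field(default_factory=list)
    history: List[str] = field(default_factory=list)   # audit trail of mutations

    def mutate(self, new_op: str) -> None:
        self.history.append(f"{self.op} -> {new_op}")  # record before changing
        self.op = new_op

def watch_trust_symmetry(score: float,
                         threshold: float,
                         notify: Callable[[float], None]) -> None:
    """Invoke `notify` when trust symmetry breaks down (score < threshold)."""
    if score < threshold:
        notify(score)

# Usage: mutate a node, then trigger a breakdown notification.
root = GenomeNode("add", [GenomeNode("x"), GenomeNode("y")])
root.mutate("mul")
alerts: List[float] = []
watch_trust_symmetry(score=0.3, threshold=0.5, notify=alerts.append)
print(root.history)  # ['add -> mul']
print(alerts)        # [0.3]
```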
Part of the AI's tasks will deal with inferring intent: for instance, an analysis of the Bernie Sanders and Gary Johnson campaigns shows asymmetries at higher scales, where the demand for information and actual voter commitment do not correlate for the top contributors (attention price). But it also pertains to crypto-native applications: bitcoin donation adoption was found to be more prevalent among libertarians, a group ideologically aligned with the decentralization of monetary policy.
Conclusions
The characteristics of blockchain-based AIs, such as being natively incentivized through tokens of value and having no single point of failure, are attractive propositions, but they also mean that a decentralized intelligence will be hard to kill if something goes wrong; and, depending on the stage of its development, such a decision could face ethical questioning. It is therefore imperative that these intelligent agents go beyond basic expectations (do no harm to humans, do the job, do not lose money) to actually solve the vulnerability issues of human systems (security), while easing human anxieties by providing transparency (in the words of Manuela Veloso, verifiable answers and consistency of answers) and operating under a set of beliefs ("mental" models) that are compatible with the human experience. To do this, the AI needs an adequate degree of trustworthiness in its own assessments, and trustable symbolic regression provides a means to that end: in the same way that humans augment their intelligence with AIs, AIs can augment their intelligence with a time-variant model of the environment created from off-chain signals. This implies operating both with a reasonable degree of intuition about cause and effect and with the ability to deal with edge cases (even in the case of narrow, purpose-specific AI).
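As a toy stand-in for symbolic regression over off-chain signals, the sketch below selects, from a small library of candidate model shapes, the one that best explains a sampled signal; the candidate set, `fit` helper, and signal are all hypothetical illustrations, not the paper's method.

```python
import math

# Hypothetical sketch: choose which one-parameter model y = a*g(t)
# best fits a stream of (time, value) off-chain signal samples --
# a minimal proxy for symbolic regression of a time-variant model.

CANDIDATES = {
    "linear":    lambda t: t,
    "log":       lambda t: math.log(1 + t),
    "quadratic": lambda t: t * t,
}

def fit(signal):
    """Return (name, coefficient, squared_error) of the best candidate."""
    best = None
    for name, g_fn in CANDIDATES.items():
        g = [g_fn(t) for t, _ in signal]
        y = [v for _, v in signal]
        # closed-form least-squares coefficient for y = a*g(t)
        denom = sum(gi * gi for gi in g) or 1e-12
        a = sum(gi * yi for gi, yi in zip(g, y)) / denom
        err = sum((a * gi - yi) ** 2 for gi, yi in zip(g, y))
        if best is None or err < best[2]:
            best = (name, a, err)
    return best

# Usage: an off-chain signal that is exactly quadratic in time.
signal = [(t, 2.0 * t * t) for t in range(1, 6)]
name, a, err = fit(signal)
print(name, round(a, 2))  # quadratic 2.0
```

Least squares over a fixed candidate set keeps the example closed-form; a real symbolic-regression system would instead search a space of expression trees, which connects back to the genetic-programming genome discussed earlier.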
If we take a lesson from history, making a new weapon so terrifying that using it becomes inconceivable (e.g., Leonardo's battlefield tanks, von Neumann's doctrine of mutually assured destruction) is itself a form of deterrence. If blockchain is truly irreversible social computing \cite{rao}, and trust is hard to earn and to rebuild, memory itself could be a useful deterrent against misbehavior and carelessness, for humans and machines alike. But it also means that wherever there is a trust imbalance (usually at the periphery of the blockchain, in the coupling with the off-chain systems that support it) there are opportunities for either trust disintermediation or arbitrage, and possibly for value creation. Moving forward, this combination of awareness of irreversibility, value by memory, and reasoning about introspection, perhaps implemented using non-ergodic variants of cultural genetic algorithms \cite{Reynolds_2011}, could allow machines to navigate the world using the same fundamental device that evolution has provided to humans: trust. And ultimately, the question is not whether we should trust AIs, but rather how AIs will trust us.