AI has problems: Together we hold the solutions [3/3]

This is a three-part article about problems AI is introducing and how to fix them with blockchain’s distributed ledgers.

Here’s the short of it:

Part 1) To reduce loss of jobs, we can:
• Make AI more accessible (rather than proprietary)
• License our knowledge (rather than sell our heartbeats)
• Provide tools for learning new skills (not automate learning)

Part 2) To reduce loss of control and accountability, and to rebalance power, we can:
• Provide open source tools we can all use (not consolidate in a few companies)
• Give bots license plates (not unaccountable machines with no reputation)

Part 3) To regain privacy, self-governance, and democratise AI, we can:
• Own our own data (not give it to others to publish).
• Build secure Conversational UIs (not lose trust in others)

  1. AI will devour jobs. AI will spawn jobs. We must be judicious in designing emergent economies. We can trace the origins of the calculator and the computer (both desk jobs, once upon a time) as automated systems that devoured jobs and spawned others. An estimated 43% of jobs will be impacted by AI in the coming five years, and, fittingly, some 43% of Americans worry about exactly that. And rightly so. But wrongly so, too.
  2. AI is impacting the decisions we make. Not only the jobs we have, but the roads we drive, the politicians we elect, the purchases we make, the news we read, and our very thoughts and decisions about healthcare, education, and finance. AI is now impacting our ability to think and to maintain control of our lives.

The solutions to these problems are being built. We, humans, are building them.

Blockchain is now being applied to AI projects such as Seed Vault, Ocean Protocol, BotChain, and others to offer solutions (part 2/3 is here).

PART THREE: We need to own our own data.

As the next billion people come online (a massive potential market), how will they have access to the benefits of the 4th industrial revolution? How will people in sub-Saharan Africa, for example, interface with AI? How will they be able to amplify their knowledge, skills, earnings and savings? How will AI help them manage their health? How will AI help governments meet the challenges that developing countries face? What will the bots that front these systems say to them, in what language, with what coercion, and under what circumstances of influence?

AI is only as valuable as the data it’s based on — and that data comes from us, people.

Isn’t it strange that the companies that are known for building their business models on end-user data are the ones with the most powerful AI systems? By giving away our privacy we give away our data, and that feeds AI which, in turn, is able to collect more of our personal information.

The foundation of AI comes from the data we, as users, feed those systems.

Since these automated systems need information, the people able to take advantage of them will be providing their information through webpages, apps, and conversational interfaces. Some of these collection methods will represent the end-user's interests better than others.

Facebook’s African user base has grown to over 170 million, and over 90% of them access the social network using mobile devices. Seven out of ten internet users in Africa now log onto Facebook daily; it is their gateway to the Internet. These large datasets have encouraged Facebook to amplify its AI methods, whether by finding meaning in your posts, following trends, or targeting ads. These are currently the primary means by which people in sub-Saharan Africa can interface with AI: amplification of knowledge via search, contact with family via social media, discovery of trends via posts. But we do not, despite decades of social media, see information equitably exchanged for social well-being, improved healthcare, education, or long-term regional stability. Facebook’s worldwide ARPU (average revenue per user) rose to $6.18 in 4Q17. That figure is two dollars more (according to the World Bank national accounts) than the daily gross national income per capita of the average sub-Saharan African: $4.15.

Our privacy is sold via publicity.

Companies now publish our private data via advertising revenue models. The value of our personal data is measured in the value of the company. Revenues and business models are not bad things, as they provide alignment of values (Google’s own conceit was that search and advertising were two sides of the same coin), but as personal information has become centralized in only a handful of companies, knowledgeable individuals, people who have been involved in social media since the mid-90s, have begun to accuse these companies of “surveillance capitalism.” This indicates a misalignment of personal and professional prerogatives. It also indicates a conflict between privacy and publicity.

But the imbalance of privacy and publicity, and the conflation of personal and professional prerogatives, indicate that the next billion people coming online may provide more value than they reap. We may, were we in a grumpy mood, call these systems contemporary feudalism.

The Privacy Problem.

Privacy is like an onion ring. Our identity blends outward, private mixing into public, as the outer edges of our information and control overlap with the people we socially interact with.

Privacy is not a binary thing. What is more private is more valuable, more of who we are, and at the center of our lives, our homes, our actions and thoughts.

Think of your house as an example of this onion ring of privacy. Out front, let’s imagine, there is a residential, suburban street. People drive on that street all the time and you don’t think of it as yours. Your sidewalk, that is, the sidewalk in front of your yard, you sweep, and people walk there and you may or may not see or talk with them. Over your yard they can see your house, and you are used to the neighbor’s kids walking through it. Your front porch, however, is a bit more private, and someone who comes to the door enters only with your permission. It is common for someone you’ve never met to stand in your foyer, and if I had sent you, dear reader, a note saying I was coming to visit, perhaps you would invite me to sit on your sofa for a few minutes. Friends and family may come into the kitchen, and perhaps I might as well, should you offer me a drink; but if you were to find me in your bed we would have a problem. Your bed is the center of your home, like your heart, and just as we keep the most valuable things we own in our bedroom (not in our yard), this gives us control over the things most precious to us. You wouldn’t be too surprised to see me hanging around in the street in front of your house, but you would be surprised if, on returning from work this evening, you were to find me in your bed.

The most private stuff is at the center of the onion ring; the most public is at its outer edges. If this is reversed then bad things happen: you lose what is valuable, and therefore you give your power to make decisions to another group of people. An imbalance of privacy and publicity generates an imbalance of power. Emerging models of AI do not currently respect the various levels of privacy and security we each need to live our lives as we choose, especially if we’re trying to build a global community in which vastly different values are at risk.

As Emmanuel Macron, France’s president, put it in a Wired interview from March 31, 2018:

“When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. [. . .] If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI.”


Blockchain allows us to own our own data. There are means by which we can record, in a radically public (non-private) manner, who contributed which piece of data. Blockchain allows micropayments, in which a fraction of a cent may be paid to someone who contributed a piece of data that another person paid a fraction of a cent to use. This means that as AI libraries increase in size we can track who contributed what data, assign accountability, and compensate those people for that data and even for that responsibility.
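The bookkeeping described above can be sketched in a few lines. This is a deliberately simplified, off-chain model, and the class and method names are invented for illustration: a public record of who contributed each piece of data, plus a usage fee split among the contributors whenever that data is used.

```python
from collections import defaultdict

class AttributionLedger:
    """Toy model of a public attribution ledger with micropayments."""

    def __init__(self):
        self.contributions = {}             # data_id -> contributor_id (radically public)
        self.balances = defaultdict(float)  # contributor_id -> accrued micropayments

    def record(self, data_id, contributor_id):
        """Publicly register who contributed a piece of data."""
        self.contributions[data_id] = contributor_id

    def pay_for_use(self, data_ids, fee):
        """Split a usage fee evenly among the contributors of the data used."""
        share = fee / len(data_ids)
        for data_id in data_ids:
            self.balances[self.contributions[data_id]] += share

ledger = AttributionLedger()
ledger.record("utterance-001", "alice")
ledger.record("utterance-002", "bob")
# A consumer pays a fraction of a cent to use both pieces of data:
ledger.pay_for_use(["utterance-001", "utterance-002"], fee=0.01)
```

A real system would put the contribution record on-chain and settle balances via smart contract, but the shape of the accounting is the same: attribution first, then proportional compensation.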

The model of the SEED Token allows just this. All CUIs (bots) are composed of developed assets. The more sophisticated the CUI, the greater the number and variety of its components and services. Developers need to retain ownership of their contributed assets. Often, assets are blended and aggregated such that they may be developed by many people. When one of those aggregated assets is used, multiple micropayments scatter to each of the developers that contributed, according to their licenses. This means that the ledger must track co-data indicating not only the source, requesting system, and destination of that asset, but also which developer gets compensated what amount for that request. The structure of the smart contract becomes most distinctive when remixing occurs. Like DNA, a tiny change may result in great divergence, so the token must identify unique differences.

The Seed token contains four elements: (1) identity, (2) asset locations, (3) balance, and (4) licensing information, which declares what the function or dataset is intended to do, separate from its control flow. By way of example, the Ethereum app CryptoKitties, which allows users to buy, collect, sell and “breed” digital pets, is a conceptually similar project. CryptoKitties allows merged assets that would normally exist in a walled garden to be properly owned by their many creators. The Seed Token provides the same provenance, copyright, and compensation for aggregated assets, but specifically for multi-modal, conversational systems. Lastly, a bot and its aggregated system may be authenticated at an arbitrary level of the asset tree. In summary, the token is designed for control over the amount of currency issued and when it is issued, payments for transactions, incentivization of curated data, and overall reduction of economic volatility.
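As an illustration only, the four elements listed above might be modeled as a simple record. Every field name, type, and value here is an assumption made for the sketch, not the real Seed token schema.

```python
from dataclasses import dataclass, field

@dataclass
class SeedTokenRecord:
    """Hypothetical record mirroring the four elements the article lists."""
    identity: str                  # (1) who owns / authored the asset
    asset_locations: list          # (2) where the asset's components live
    balance: float                 # (3) token balance tied to the asset
    licensing: dict = field(default_factory=dict)  # (4) declared intent, separate from control flow

# A remixed conversational asset built from two components:
token = SeedTokenRecord(
    identity="dev-42",
    asset_locations=["asset-store/intent-model", "asset-store/dialog-tree"],
    balance=12.5,
    licensing={"intended_use": "customer-support dialog", "remix_allowed": True},
)
```

The licensing field is the interesting part: because it declares what the asset is *for*, separately from how it runs, a ledger can check intent and route compensation without executing the asset itself.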


Another way to address the loss of privacy is not ownership and compensation so much as secure interfaces, at least in so far as any conversation can be made private. This has two parts: authentication of the bot is one part of the puzzle, and end-to-end encryption is the other. Together they mean the conversant can know there is security up to, but only up to, the deployer responsible for the bot, which matters for anyone sharing sensitive data.

Consider someone sharing financial or healthcare data with a bot. They need to know that there is no eavesdropping or man-in-the-middle attack, and they need to know that the entity on the other end is who it says it is. Of course, like any conversation, this has a limit: once the information is on another person’s screen it is no longer private. But the line can be secured and the bot authenticated.
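A minimal sketch of the authentication half of that puzzle, assuming a pre-shared key the bot registered with some directory at deploy time. Production systems (including Signal) use asymmetric signatures rather than a shared-key HMAC, so treat this as a stdlib-only stand-in for the idea, not an implementation.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """The client sends the bot a random challenge."""
    return secrets.token_bytes(32)

def bot_respond(shared_key: bytes, challenge: bytes) -> bytes:
    """The bot proves it holds the registered key by MACing the challenge."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify_bot(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The client checks the response with a timing-safe comparison."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)        # registered out-of-band at deploy time
challenge = issue_challenge()
response = bot_respond(key, challenge)
ok = verify_bot(key, challenge, response)  # True only if the bot holds the key
```

An impostor without the registered key cannot produce a valid response, which is the "license plate" half; the encrypted transport then keeps the exchange private in transit.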

Botanic was asked to deploy such a system. Doing so meant using an authenticated bot on the fully open-source messaging platform Signal. Published by Open Whisper Systems, the Signal messaging app provides complete end-to-end encryption. Because it is open source, the communications line can be audited, and with the bot authenticated on the other end, user data is shared only with the owner of the bot. This meant that, as with a person, the line was secure, but there is still the need to trust the party responsible for deploying the bot.

GDPR, the Consent Act, and other regulations are arriving, and they will generate great inconvenience for proprietary, closed AI systems. Those systems will need to become publicly transparent to demonstrate that they respect end-user privacy.

In the end, it’s up to us to respect ourselves. In the end, we are the AI.

We are accountable to ourselves. And this extends to many ecosystems, not just AI and bots; we are seeing entire economies converge. For more, see Lawrence Lundy’s article The End of Scale: Blockchains, Community, & Crypto Governance.


Bots provide an interface to AI, and by controlling and designing that interface, and the curated data behind it, we are able to control and design AI so that it represents end-users’ values. This is not an easy task, and it requires multiple large-scale systems to open source the tools, provide the blockchains, and manage the networks. Otherwise we have a lot more problems with AI than solutions.

We can do it. And we need your help. Now is the time to own the future of AI.

Please join us at Seed Vault, Botanic Technologies, Ocean Protocol, BotChain, and other projects that are democratizing AI.

— — — — — — — — — — —

Thanks: Massive thanks to the formidable Michael Tjalve, of Microsoft’s Bots for Good for his input, critical thinking, informative examples, and philosophic temper. Thanks also to Lawrence Lundy of Outlier Ventures, and Ben Koppleman of SEED for their reviews, additions, and input.

AI has problems: Together we hold the solutions [3/3] was originally published in Chatbots Life on Medium, where people are continuing the conversation by highlighting and responding to this story.