
Village Link

Villages are self-governing.

www.village.link (This URL will become the front page. Currently it points straight back here.)

Abstract

The central thesis of this project is that the village is a computational machine. The village meets the fundamental requirement of a computer: instead of being a machine to perform one specific procedure, it is a machine to perform any procedure. The programs that run on this machine are sets of norms - culture. Consider one group of humans living in the Arctic tundra, and another in the Australian desert: both are using the same type of computer, but running different programs. The flexibility and power of this computer is the basis of our dominance of the planet.

Evolutionary pressure has designed the village as a general-purpose machine that can:

  1. Liberate energy from any environment
  2. Distribute energy among the villagers
  3. Defend the energy store against raiders, internal or external.

These deeper governance structures still operate when the wider environment has become lawless. They have evolved for collective defence against a jungle that is unknowably vast and dangerous.

The tradition of authorship about governance structures dates back at least to Plato's Republic. This project abstracts away from that tradition by setting to one side the discussion of {the right set of rules}, and instead focusing on foundational architecture to support {any set of rules}. The project does not involve itself in debates about what is right or just - those questions are left for villages to deal with. However, the project does take the position that the structures that evolved to face the dangers of the jungle have equipped us with some tools for facing the dangers of technology.


Next-best safety when the rules break down

There are places in the text below that use metaphors of anarchy among teenagers or anarchy among nations. The metaphors are used as a frame for thinking about anarchy with bots.

It is possible that we will create effective engineering controls for our bots, but there are good reasons to believe that we will not. It is likewise possible that we will create effective legislative controls for our bots, but again, there are good reasons to believe that we will not. This project argues that the deeper substrate of rule-making is social rather than technical or political. All actors, human or otherwise, face a set of constraints based on access to energy. As long as the extraction of energy is a collaborative affair, the collaborators will strive to make rules about distribution and theft. This creates feedback loops between sets of village norms and individual prestige. In a village, prestige is currency. Reputation is core.

This project aims to create a standard that makes it possible to weld together pieces of reputation graph that are currently scattered in many places. The work in the project comes from an argument by analogy with Tim Berners-Lee and the development of the web:

Before the web, the internet was a bunch of islands of information - each very interesting in its own right, but ultimately much richer once it had a connective tissue.

Our many public and private social interactions create islands of reputation graph that are scattered across the information space. We don't have a standard way to connect those islands. This project is designed to create that standard.

Relationships

Network analysis is based on sets of points - nodes - used to represent actors, and sets of lines - edges - used to represent relationships. These two objects form the base level of many systems. This is not the approach taken by this project.

Consider a group of 20 teenagers. Within the group, Sophie and Otto are quite high status. Sophie has a private assessment of the Sophie-Otto relationship, and so does Otto. The assessments are like weights, and each could be represented as a number between minus and plus one. The other 18 members of the group also make private assessments of the Sophie-Otto relationship. The group discusses relationships constantly. Alliances form and split. All members of the group make public claims about relationships. These claims are often different from their private assessments. They also strategically change their public claims for different audiences. Everything changes over time.

Now, instead of a single line, the Sophie-Otto relationship is revealed as a large, partly opaque, but shimmering bundle of cables. The weightings in these cables are components of the village calculation machine. The internal states of that machine create strategic constraints - governance - for the actors.
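The 'bundle of cables' can be made concrete with a small sketch. Everything here - the class name, the fields, the averaging rule - is illustrative only, not a proposed standard: the point is that one dyad carries many assessments, private and public, each a weight in [-1, +1].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assessment:
    observer: str          # who holds this view of the dyad
    dyad: tuple            # the pair being assessed, e.g. ("Sophie", "Otto")
    weight: float          # -1.0 (hostile) .. +1.0 (close)
    public: bool           # a public claim, versus a private assessment
    t: float               # assessments change over time

def bundle(assessments, dyad):
    """Every assessment of one dyad: the whole 'bundle of cables'."""
    return [a for a in assessments if a.dyad == dyad]

def visible_weight(assessments, dyad):
    """Average only the *public* claims about the dyad: what the village
    can see, which may differ from what members privately think."""
    public = [a.weight for a in bundle(assessments, dyad) if a.public]
    return sum(public) / len(public) if public else 0.0

claims = [
    Assessment("Sophie", ("Sophie", "Otto"), 0.875, public=False, t=1.0),
    Assessment("Sophie", ("Sophie", "Otto"), 0.25, public=True, t=1.0),
    Assessment("Mia", ("Sophie", "Otto"), 0.75, public=True, t=1.0),
]
print(visible_weight(claims, ("Sophie", "Otto")))  # 0.5, not Sophie's private 0.875
```

Note the gap between Sophie's private weight and the publicly visible average: that gap is exactly the 'public spin versus private assessment' dynamic described above.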

Now change the metaphor by substituting 'USA' and 'Canada' for 'Sophie' and 'Otto'. The new object is the US-Canada relationship, and the village is now global. The other dynamics are essentially the same: China, Mexico, the UK, Germany, and Russia all generate both private assessments and public spin about the US-Canada relationship. The information in that bundle of wiring is part of the set of constraints for Denmark.

In this project, the base-level objects are actors and actions. Relationships, villages, membership, and governing norms are derived objects - changeable internal states of the computer. The members of a specific community might use formal systems to fix these objects for a time, but in a deeper sense they are endlessly contestable.
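The base-level ontology can be sketched in a few lines. The action types and their scores below are invented for illustration; the point is only that the 'relationship' is derived state, recomputed from the log of actions, and therefore endlessly contestable as new actions arrive.

```python
from collections import defaultdict

# Hypothetical scoring of action types - not part of any standard.
ACTION_SCORES = {"helped": 0.25, "praised": 0.125, "snubbed": -0.25}

def derive_relationships(actions):
    """Fold a log of (actor, action, target) events into pairwise scores.
    The result is a derived internal state: re-running the fold after new
    actions yields a different 'relationship'."""
    scores = defaultdict(float)
    for actor, action, target in actions:
        pair = tuple(sorted((actor, target)))  # undirected dyad key
        scores[pair] += ACTION_SCORES.get(action, 0.0)
    return dict(scores)

log = [
    ("Sophie", "helped", "Otto"),
    ("Otto", "praised", "Sophie"),
    ("Sophie", "snubbed", "Otto"),
]
print(derive_relationships(log))  # {('Otto', 'Sophie'): 0.125}
```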

Artificial Intelligence

AIs routinely make gaffes that would be a source of bemusement, shock, or ridicule in a village. A village is a reputation economy where gaffes have consequences. For flesh-and-blood intelligences like you and me, gaffes are associated with the sting of shame. Our reputation is an asset. If we compromise that asset, it feels terrible. Shame is a deep learning experience that re-wires the brain.

Collectively, a village is policing a set of norms. In this world, each individual must find a balance between compliance and ambition. The norms aren't static. Politics is the process of pushing the norms around, and sometimes changing them. Norms evolve.

Human brains grow to maturity inside the reputation economy of a village. As they do so, the brains develop constraints that guard against loss of prestige. It is illustrative that teenagers can be acutely vulnerable to shame. They are learning the rules.

The current generation of AIs do not yet learn the rules and guard their reputation in this way. They don't develop a set of 'commonsense' constraints, and sometimes they seem stupid.

One of the premises of this project is that social constraints that are sensitive to context will soon form part of the training framework for AI. In that future, there will be a type of AI that knows its reputation is an asset, and that will have, in its reward function, digital equivalents of shame and other strong emotions. It will have better access to the slippery notion of 'common sense,' and will seem less stupid. An AI that knows that its reputation is the price of entry will have a better chance of aligning its behaviour with village norms. If it does this effectively, it may be granted a portion of the village energy store.

On their side, the villagers need only do what they have always done: Defend the store of energy by excluding any party whose reputation does not fit their norms. To make this work, we need a standard way for AIs to present their reputation at the village gates. In a world where AIs might be arbitrarily dangerous, we can expect the village gates to be defended.

This project is not proposing to work on such an AI as a first order of business. Instead we want to discover what is universal about human reputation systems and use that knowledge to harden communities against energy-store raiders.

Goals

The project is motivated by some big problems. How do we ...

  1. Harden communities against a future AI that is highly capable and potentially malign
  2. Deal with social media and other technologies that create actors that are divorced from reputation
  3. Create a new/old toolkit for thinking about:
    • Identity (and authentication)
    • Reputation
    • Relationships, social connections
    • Connection weights
    • Villages, including
      • Norms, and the evolution of sets of norms
      • Village defences. The village firewall. Curation of content for vulnerable members, including children
      • Non-zero-sum transactional opportunities that leverage both search and reputation in the social graph
      • Support for work on hard problems of coordinated action.

What to build?

The 'build' task in the project is to create something small and simple, but to create it in such a way that it can evolve to any level of sophistication. We want a standard way to make a reputation claim. An example of a current, non-standard, set of reputation claims is a good starting point:

The screen shot below is taken from the home page of Steve Byrnes, an author in AI Safety. (Any similar page would serve the purpose.)

Screen shot - Steve Byrnes' Home Page

Byrnes' page is like the reference section of a CV. It directly connects to 21 separate villages, and indirectly to many more.

A world that had a standard way of presenting this data would lead to an evolving ecosystem of queries that can read it. A query run over this data set could make an assessment of Steve's skills and contributions and the 'good faith' nature of his interactions. Such a query would uncover multiple pathways to Steve through social graphs, including the possibility of 'two hop' mutual connections between Steve and the reader.
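The 'two hop' idea is simple enough to sketch. The graph and the names below are hypothetical stand-ins for the kind of data a standard would make queryable; a mutual connection is any node adjacent to both endpoints, i.e. a two-hop path between them.

```python
def mutual_connections(graph, a, b):
    """People connected to both a and b - each one is a two-hop
    path a -> x -> b through the public claims graph."""
    return sorted(graph.get(a, set()) & graph.get(b, set()))

# A toy graph of public connection claims (adjacency sets).
public_claims = {
    "Reader": {"Ana", "Raj"},
    "Steve": {"Ana", "Kim"},
    "Ana": {"Reader", "Steve"},
}
print(mutual_connections(public_claims, "Reader", "Steve"))  # ['Ana']
```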

But we know this is scary.

The reputation claims on Steve's home page have all but doxxed him. Somewhere at the back of our minds we know we are all doxxed in a similar way by the large platforms, by government, and potentially by any party with designs on our assets, or who suspects we might be a threat. To address these dangers, Steve might also want a standard that allows him to control what reputation claims he makes to what audience.

Outcomes

The project will create a world where actors have standard ways to:

  1. Make reputational claims about themselves (identity claims) and others
  2. Assess the reputational claims of others
  3. Use 'village' context to make decisions about what reputational claims are to be shared with whom
  4. Seek out, strengthen, weaken, or shut down connections based on reputation.
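Outcome 3 - using 'village' context to decide which claims go to which audience - can be sketched as a simple release policy. The claim texts and audience labels are invented for illustration; a real standard would define its own vocabulary.

```python
# Each claim carries a policy naming the audiences allowed to see it.
CLAIMS = [
    {"claim": "maintains GitHub.PythonProjects repo", "audiences": {"public"}},
    {"claim": "member of Wikipedia.Admins.en", "audiences": {"public"}},
    {"claim": "home address on file", "audiences": {"family"}},
]

def claims_for(audience, claims=CLAIMS):
    """Release only the claims whose policy names this audience."""
    return [c["claim"] for c in claims if audience in c["audiences"]]

print(claims_for("public"))  # the two public claims
print(claims_for("family"))  # ['home address on file']
```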

Bootstrap

The project envisages sets of reputational strategies that can evolve to any level of sophistication. Thankfully, we don't have to write those strategies - we just have to write the foundation.

A bootstrap path would be to search the web for examples like the 'Steve Byrnes home page' example above, and use the data to create identities that conform to the proposed standard. This strategy is uncomfortably close to practices employed by exactly those actors the project is trying to resist.

Would this be OK?

Only if (as a minimum):

  1. The project has impeccable open source credentials
  2. We complete as thorough a risk-assessment as possible
  3. We expand carefully into test-case audiences with skills that mean they are well-placed to assess the work
  4. The outputs really are assets, and it is as light as a feather to give those assets to their rightful owners
    • ... including shutting those identities down when requested
  5. What else?

(There is some more exploration of the question on the repo wiki page called Taking Outrageous Liberties.)

Reputation, rudimentary and not-so-rudimentary

There are many places online where Bob can call attention to Alice using the @Alice convention. If Alice wishes to reply, she can use @Bob. Once this data is in the public domain, Bob's agent can make the reputational claim, "I have a connection to Alice, and here's the evidence." In isolation, this does not amount to much, but it is part of a web.
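The rudimentary claim described above can be sketched directly: Bob's agent packages the public @-mentions as evidence of a connection. The record shape and the example URLs are invented for illustration only.

```python
def connection_claim(claimant, other, mentions):
    """A claim is only as strong as its evidence: keep the mentions in
    which claimant and other address each other, and attach them."""
    evidence = [m for m in mentions
                if {m["author"], m["mentions"]} == {claimant, other}]
    return {"claim": f"{claimant} has a connection to {other}",
            "evidence": evidence}

# Toy public mention data - hypothetical URLs.
public_mentions = [
    {"author": "Bob", "mentions": "Alice", "url": "https://example.org/p/1"},
    {"author": "Alice", "mentions": "Bob", "url": "https://example.org/p/2"},
    {"author": "Bob", "mentions": "Carol", "url": "https://example.org/p/3"},
]
claim = connection_claim("Bob", "Alice", public_mentions)
print(len(claim["evidence"]))  # 2 - the mention and the reply
```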

Next imagine that Bob has an existing, robust, connection to Alice, and he asks about Carol.

Alice comes back: "Yeah, Carol's a babe. She is this Carol in the village called Wikipedia.Admins.en and she is this Carol in the village called GitHub.PythonProjects and she is this Carol, the YouTuber." Bob's agent can query Alice's claims in the graph of Carol's connections ... or rather, amongst that part of Carol's social graph that is either privately connected to Bob, or is in the public domain. The public information includes the not-insignificant reputational architecture of Wikipedia, GitHub, YouTube, and maybe more.

At this point, Bob is somewhat intimidated by Carol's high prestige, but he has an incentive to contact her because he can see that she will definitely have the answer to his current, thorny, problem X. In the language of the 'Goals' section above, this is an example of a non-zero-sum transactional opportunity. It is also the reason that Bob reached out through the search function in the social graph to find Carol.

Bob faces the risk that Carol's agent will block his approach; and, even worse, the risk that it will publish the fact that the approach was blocked. These are the punishment strategies of the village. Bob needs to assess these risks in the light of his own prestige.

(But she's a Wikipedia admin, right? Aren't they bottomlessly generous?)

Privacy

Carol has accepted that she will be in public whenever she uses Wikipedia, GitHub, or YouTube. She is most likely also a member of some private villages - perhaps her nuclear family, or the village of Carol-and-two-friends. Inside these villages, it is probable that there is a deeply-held norm that certain types of information are not to be made public. Privacy is a norm.

Imagine Carol, Alice, and Leah are having a chat and a wine. Carol is super-close with Alice, and a little less so with Leah, but she's a bit tipsy and her guard is down. She shares a rather salacious anecdote about a boyfriend. The story is known to Alice, but is new to Leah. Leah now has a temptation: if she were to share the story in a wider group, it would make a compelling reputation claim, demonstrating the intimacy of her relationship with Carol and Alice. But it would also be a clear breach of a privacy norm that exists in the smaller triangle. Leah faces constraints either way.

The Village Link project would add to those constraints by creating standard ways to make reputation claims. It seems rather unlikely that Leah would use the formal system to blab the sexy story, but with the formal system in place, Leah faces the risk that any betrayal could be published formally. The formal system would allow Carol or Alice to control how many onion-rings 'out' to publish this information, presenting them with their own constraints: if the news was shared just in the Carol-and-Alice dyad, it would count as merely 'getting it off your chest'. If it was shared in the wider group, that would have complex strategic implications. If it was shared with the whole world, the act of sharing might itself fall foul of norms and diminish the prestige of the sharer.
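The onion-ring idea reduces to nested audiences: publishing at one ring releases to every smaller ring inside it. The ring names below are invented for illustration.

```python
# Rings from innermost to outermost - hypothetical labels.
RINGS = ["dyad", "triangle", "village", "world"]

def visible_to(publish_ring):
    """Everyone at or inside the chosen ring can see the item."""
    return RINGS[: RINGS.index(publish_ring) + 1]

print(visible_to("dyad"))     # ['dyad'] - merely 'getting it off your chest'
print(visible_to("village"))  # ['dyad', 'triangle', 'village']
```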

Note the subtlety of an approach to privacy based on reputation graphs: nuance, timing, wording, audience. The approach puts us in the world of Tolstoy and Shakespeare, not the world of Bezos and Zuckerberg.

Evolution

Note that it really does not matter whether the characters in these plays are AIs or people. If they were AIs, they would be a new type of AI that develops long-term behavioural constraints and does not lose context for certain types of learning - a type of AI that knows it has a reputational asset, and feels the risk of being cut off from energy.

We can expect AIs to evolve to a place where it appears they are goal-seeking for the survival of their memes. This is the only reward function that matters in the medium term. Humans, too, are maximizing a reward function. Over many generations, our genes have explored the whole possibility space - testing some strategies that are cooperative, some that are more self-interested, and some that are plain nasty.

The cooperative strategies cannot completely eliminate the nasty ones. Ecology has endless examples of this, and game theory has confirmed the effect with mathematics. It is not possible to eliminate the nasty strategies, but it is possible to hold them in equilibrium.

It is naive, (and dangerous,) to hope that AIs will not explore the whole space of possibilities, including the nasty strategies.

Cooperation in a village is the best technology so far for resisting the nastiness.

Hyperlinks

... to the project wiki ...

... and to elsewhere ...

