Autonomous Software Agents as Trustees

Now we come to idea #2: immortality.

“Arthur’s brain could always be replaced,” said Benji reasonably, “if you think it’s important.”
“Yes, an electronic brain,” said Frankie, “a simple one would suffice.”
“A simple one!” wailed Arthur.
“Yeah,” said Zaphod with a sudden evil grin, “you’d just have to program it to say What? and I don’t understand and Where’s the tea? – who’d know the difference?”
“What?” cried Arthur, backing away still further.
“See what I mean?” said Zaphod and howled with pain because of something that Trillian did at that moment.
“I’d notice the difference,” said Arthur.
“No you wouldn’t,” said Frankie mouse, “you’d be programmed not to.”

My limited definition of immortality is this: ensuring that a part of you lives on, so that your wishes are carried out well beyond your expiry date. Traditionally, it has been done in one of two ways:

  1. Genetically.
  2. By making a lot of money and establishing a trust fund run by a bunch of lawyers whose firm has been around for centuries.

Both methods have their limitations. The genetic method involves creating a creature which develops its own ideas about what to do with your legacy. The other one has a high barrier to entry and is prone to creative interpretation on the part of trustees.

Now consider the breadth and depth of data available online and the growing number of things that can be done simply by being connected to the Internet. It is possible to write a program – essentially, an autonomous software agent trustee, an asat – which will

  1. manage the resources required to fund itself and its objectives,
  2. monitor the net for events, and
  3. carry out actions – selling stock, sending roses, sending periodic emails, giving money to individuals, institutions and charities – based on triggers like timeouts and events visible on the net. (A rough sketch of such a core loop follows.)
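
Very roughly, the core loop of such an agent might look like the sketch below. Everything in it – the framework object, its method names, the trigger/action pairs – is hypothetical; the point is only that the logic a human trustee has to verify can stay very small.

    # A minimal, hypothetical sketch of an asat's core loop.
    # The Framework object and all method names are made up for illustration;
    # a real implementation would sit on top of whatever APIs exist at the time.

    import time

    class Asat:
        def __init__(self, framework, rules):
            self.fw = framework      # long-lived abstractions: portfolio, mail, search
            self.rules = rules       # (trigger, action) pairs, verified by a human trustee

        def step(self):
            events = self.fw.poll_events()     # timeouts, news items, deaths, anniversaries
            for trigger, action in self.rules:
                for event in events:
                    if trigger.matches(event):
                        action.execute(self.fw, event)   # e.g. sell stock, send roses, email

        def run(self):
            while True:
                self.fw.rebalance_portfolio()  # keep funding itself and its objectives
                self.step()
                time.sleep(24 * 60 * 60)       # once a day is plenty for a century-scale agent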

The legal framework would be the same as that used by trusts. It’s just that the human trustees now have a very simple job – they need to verify the legality of the program and host it somewhere with net access. This enables them to deal with orders of magnitude more clients than they could if they had to function as “human executors” of “wills” – which is essentially the same job. It also reduces the scope for creative interpretation.

Given Moore’s law, the cost of running the asat is next to nothing. Given compound interest, after several decades its financial power will be far greater than anything you could achieve in your lifetime – especially since it will use strategies which work well over really long terms (buy-and-hold), whereas impatient monkeys like you can’t resist the urge to meddle. You can then use it to do significant stuff without relying on your descendants: leave the bulk of your assets to them in the traditional way and a small amount to power your asat. Compound interest will do the rest of the job.
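
To put a rough number on the compound-interest point (the 7% real return here is purely an assumption for illustration, not a forecast):

    # Back-of-the-envelope growth of a modest seed at an assumed 7% real annual return.
    seed = 10_000      # dollars set aside to power the asat
    rate = 0.07        # assumed long-run real return; purely illustrative
    for years in (30, 60, 100):
        print(years, round(seed * (1 + rate) ** years))
    # -> roughly 76,000 after 30 years, 580,000 after 60, 8.7 million after 100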

The asat would use abstractions which remain meaningful across long periods of time (hundreds of years). The framework in which the asat is written provides fixed APIs and implementations of these abstractions. The framework will need to be upgraded from time to time as the implementations change, but the asat’s core logic can remain as-is.
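
To make that concrete, here is one hypothetical shape such a fixed API could take. The Gift and GiftCatalog names are invented for this sketch, not part of any real framework:

    # A hypothetical framework abstraction meant to stay meaningful for centuries.
    # The asat codes against this interface only; the implementations behind it get
    # swapped out by framework maintainers as the world changes.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Gift:
        name: str
        price_usd: float   # quoted in inflation-indexed ("constant") dollars

    class GiftCatalog(ABC):
        @abstractmethod
        def popular_gifts(self, age_min: int, age_max: int,
                          budget_constant_usd: float) -> list[Gift]:
            """Return the currently most popular gifts for an age range, within a
            budget in constant dollars. How this is answered is the framework's
            problem, not the asat's."""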

Here is a simple example of something an asat could do (a code sketch follows the list):

  • Generate some amount of cash every year from an investment portfolio.
  • Pick some descendants at random, probably the younger ones. It’d know who your descendants were by walking through the births/deaths/DNA fingerprint databases. It might even be able to compute who your favourite great-great-great grandchild is, by looking online at their school scores, favourite books, toys, etc.
  • Pick at random from the 10 most popular toys for that age that fit within the budget. (Today’s implementation in the framework: look at the Amazon top 10.)
  • Buy and ship, optionally with a note and a sermon from great-great-great grandma appropriate for that occasion.
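
Strung together on top of the GiftCatalog abstraction sketched above, the yearly routine might look something like this – the registry, portfolio and shipping calls are all assumed framework services, not real APIs:

    # A hedged sketch of the yearly gift routine, written against the framework.
    # fw.portfolio, fw.registry, fw.gift_catalog and fw.ship_gift are assumptions.

    import random

    def yearly_gift_run(fw, budget_constant_usd=100.0):
        cash = fw.portfolio.withdraw_yearly_allowance()        # generate some cash
        kids = [p for p in fw.registry.descendants_of("me")    # walk the births/deaths/
                if 6 <= p.age <= 8]                            # DNA registry
        if not kids or cash < budget_constant_usd:
            return
        child = random.choice(kids)                            # pick a descendant at random
        gifts = fw.gift_catalog.popular_gifts(6, 8, budget_constant_usd)
        if not gifts:
            return
        gift = random.choice(gifts)                            # pick from the top 10
        fw.ship_gift(gift, to=child,                           # buy and ship, with a note
                     note="With love, and a short sermon, from great-great-great grandma.")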

This simple example illustrates a few common characteristics of such agents:

  • They can identify people through a chain of trust which extends through time, using resources available online, and make decisions with a high degree of confidence.
  • They can make a good buying decision – appropriate for the time in which it is made – without having any conceivable idea, at programming time, of what toys would exist or be popular 100 years hence.
  • The framework needs to translate a time-independent command like “give me a list of the 10 most popular gifts for 6-8 year olds under 100 dollars (indexed for inflation)” to an Amazon API call. 20 years later, this may be a lookup on some MegaGoogle API. Someone needs to keep updating the framework.
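
Continuing the same sketch, the maintainer’s job is to keep a current implementation plugged in behind the fixed interface. The helpers below are stubs standing in for whatever product-ranking and inflation-index services exist at the time; none of them refer to a real API:

    # One possible "today" implementation of the GiftCatalog interface sketched earlier.
    # bestseller_feed() and inflation_index() are stubs; a maintainer would rewire them
    # to the Amazon-of-the-day (or the MegaGoogle of 2040) without touching the asat.

    def inflation_index() -> float:
        """Stub: ratio of current dollars to the asat's base-year dollars."""
        return 1.0

    def bestseller_feed(age_min: int, age_max: int) -> list[Gift]:
        """Stub: whatever ranked product feed is wired in this decade."""
        return [Gift("building blocks", 35.0), Gift("puzzle set", 20.0)]

    class CurrentGiftCatalog(GiftCatalog):
        def popular_gifts(self, age_min, age_max, budget_constant_usd):
            nominal = budget_constant_usd * inflation_index()   # constant -> current dollars
            items = bestseller_feed(age_min, age_max)
            return [g for g in items if g.price_usd <= nominal][:10]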

There are many interesting points about asats. I will talk about a few of them below.

  • One of the most important issues is that of legality. Human trustees would be ultimately legally responsible for the actions of the software agent. Therefore, one immediate barrier to agent complexity is that the code needs to be simple enough for a qualified human to verify and certify (and recertify, on demand, when laws change) that its behaviour would be legal. One way is to translate the code to LEGAL, an Anglo-Saxon language full of whereases and heretofores. This LEGAL document can then be examined by the trustee – if he is comfortable with executing this piece of LEGAL, then all he is doing is outsourcing the execution to the software, which presumably functions as advertised. This is not such a big deal, since humans are already called upon to validate, certify and audit the behaviour of very complex information systems for compliance with regulations specified in LEGAL. Anyway, there are going to be any number of tiny island republics with liberal laws, submarine fiber (and maybe nuclear deterrents), which will gladly host your asat for you.
  • You don’t need AI, strong or otherwise, to accomplish this. It’s possible with today’s technology. However, AI would certainly make it possible to specify more and more complex behaviour (approximating your own), although it would pass the behaviour verifiability event horizon at some point. Asymptotically, this would lead to the download-yourself-into-teh-intarweb immortality which strong AI and Singularity proponents are dangling in front of us.
  • A language for speaking to asats will emerge. Let’s say you want to build a hospital in Hingane Budruk. You would write a blog post or a press release, with a Request for Funding tagged with the appropriate keywords. Some asats would notice – through MegaGoogle news – that someone is planning a hospital in their home town. They would then put up some money towards it.
  • “Exploits” targeting such agents to get money out of them will inevitably appear (“I am the ghost of MRS MARIAM ABACHA…”).
  • A stronger way to ensure that your asat doesn’t squander its money on fraudulent RFFs would be to create a chain of trust. For example, I trust the judgment of X, Y and Z for financial matters and assign scores to each to reflect my level of confidence, and A, B and C for political matters. I will encode this, sign and publish my trust matrix. X, Y, Z, A, B and C will do the same. Sooner or later, the trust chain will include younger and younger people. So some guy in 2100 may endorse a particular scheme, and your asat can evaluate the value of his endorsement by computing the path through the trust graph and the weights on the edges. It won’t be just one guy, of course. Thousands will vote on schemes and by aggregating their judgment, your asat will have as good a way of getting sound advice as any. This is similar to PageRank and other Web 2.0 peer-rating methods, except that it extends much further through the time dimension. Using the temporal chain of trust, your asat would even be able to take a stance on the then-extant politics and contribute to political causes. (A toy version of this computation appears after this list.)
  • How do you update the core logic? Again, the chain of trust will help. The agent could fork(), with the new copy getting some money and trying out some other “highly rated” code fragments for financial planning from other asats, etc. Asymptotically, this line of thought points towards DNAesque evolution.
  • The asat never tires, never wants to die, doesn’t waste money on vacations or health care. Asats may be among the richest people in a century or so and society may be dominated by “dead hands” (this, of course, may already be true).
  • The asat can have a presence in Second Life or other online virtual worlds. A soup of ELIZA, voice recordings, text-to-speech and some seed data would make for a really creepy experience. People generate massive quantities of content – photos, email, videos – which can be mined to provide some idea of their behaviour.
  • You can run one, or two or twenty such agents. You can start running them right away (“living trust”), with yourself as the human trustee responsible for guaranteeing their good behaviour.
  • Extra-legal agents using anonymous funding can be created to run in some offshore data haven. They can do a lot of mischief, like sponsoring terrorist activities, verifying their occurrence via news events, then paying off the perpetrator…
  • Collaboration between asats. You might help out asats of your “friends” (computed through the chain of trust), descendants, peers who are in danger of extinction due to some imprudent investments. Or collaborate on filtering events, financing ventures, etc.
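
As promised above, here is a toy version of the trust computation. The graph, the scores and the “multiply confidence along the strongest path” rule are all assumptions made for illustration:

    # Toy temporal chain-of-trust: how much weight should my asat give an endorsement
    # made in 2100 by someone I never knew? One simple rule (assumed here): take the
    # product of confidence scores along the strongest path through the published graph.

    def trust_in(graph: dict, me: str, endorser: str) -> float:
        best = {me: 1.0}
        frontier = [me]
        while frontier:
            node = frontier.pop()
            for neighbour, weight in graph.get(node, {}).items():
                score = best[node] * weight
                if score > best.get(neighbour, 0.0):   # found a stronger chain of trust
                    best[neighbour] = score
                    frontier.append(neighbour)
        return best.get(endorser, 0.0)

    # I trust X at 0.9 on financial matters; X trusts a younger advisor at 0.8; and so on
    # down the decades. An endorsement of an RFF by advisor_2100 then carries a weight
    # of about 0.5 for my asat.
    graph = {"me": {"X": 0.9}, "X": {"advisor_2040": 0.8}, "advisor_2040": {"advisor_2100": 0.7}}
    print(trust_in(graph, "me", "advisor_2100"))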

Technology can empower millions of people to create asats, just as millions of them create avatars, Sims or tamagotchis today. Extending this into something which can function as a will requires grandfathering in a firm of solicitors with a hundred-year track record, in addition to developing the framework, the language, the chain of trust, etc. This is the standard way for new insurance companies to acquire a veneer of respectable age and stability – they buy up the tailor whose great-great-grandfather made chaddis for Mangal Pandey and proudly claim “Covering your assets since 1857”.

Go ahead, play God. Program for eternity!


3 responses to “Autonomous Software Agents as Trustees”

  1. Interestingly – asat in Sanskrit means “untruth” or something similar. I’m no Sanskrit scholar, so pardon me if I haven’t explored some deep nonexistent philosophical connotation of the word – nor any mystic intonation of psychically resonant syllables.

  2. Funny you should mention mystic intonations of resonant syllables – I vividly remember the Discovery of India TV series with Roshan Seth, which had, in the background, the ancient Aryans chanting in Hindi:

    Srishti mein pehle sat nahin tha,
    Asat bhi nahin,
    Antariksh bhi nahin tha!

    (In the beginning of creation there was no sat, nor even asat, nor was there space!)

    “Untruth” or “unreal” isn’t a bad term for something which is not alive, at least from our parochial perspective.

  3. Roshan Seth made a convincing Nehru.

    The music was by Vanraj Bhatia. And I remember the chant ended in a chorus that went something like

    … hai kisi ko nahin pata,
    nahin hai pata,
    nahin hai pata.

    (… nobody knows, it is not known, it is not known.)
