Sunday, November 6, 2011

Singularity Video

A little ditty I came across during one of my manic internet search sessions:

Saturday, November 5, 2011

AI and the Evolution of Human Mores

To Whom It May Concern at the Singularity Institute,

Perhaps half an hour ago, I came across one of your blog posts.  It was entitled: 

Interview with New Singularity Institute Research Fellow Luke Muehlhauser: September 2011.


At some point during my reading, I thought of an idea pertaining to the programming of AI morality (let me admit here that I have virtually no programming experience, save for the work I did with Integrated Stat 9, an econometrics program I used as an undergraduate at Tufts University).

The article referred to a number of potential pitfalls, one of which is as follows:

2)  Instilling the capacity for moral/ethical analysis requires not only the reconciliation of more than 7 billion contemporary unique viewpoints, but also consideration as to whether the moral and ethical perspectives of today's population are necessarily the ideal model on which to base the moral/ethical analysis performed by tomorrow's AI.

I believe that, with respect to the former, neural networks and/or multivariate analysis will do the trick.
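To give that former point a little concreteness: the letter leaves the mechanism open, so the sketch below uses simple clustering as a stand-in for the neural-network or multivariate approach mentioned above, compressing a large number of individual "viewpoint vectors" into a handful of representative positions. The population size, the questionnaire framing, and every number in it are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row is one person's "viewpoint vector", e.g. numeric
# answers to a fixed battery of moral-judgment questions (all values invented).
n_people, n_questions = 10_000, 8
viewpoints = rng.normal(size=(n_people, n_questions))

def kmeans(data, k, iters=50):
    """Plain k-means: compress many viewpoints into k representative positions."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each viewpoint to its nearest representative position.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each representative to the mean of its assigned viewpoints.
        new_centers = []
        for j in range(k):
            members = data[labels == j]
            new_centers.append(members.mean(axis=0) if len(members) else centers[j])
        centers = np.array(new_centers)
    return centers, labels

consensus_positions, membership = kmeans(viewpoints, k=5)
print(consensus_positions.shape)  # (5, 8): five representative moral stances
```

This obviously does not "reconcile" 7 billion viewpoints in any deep sense; it only shows the kind of dimensionality reduction such a reconciliation would presumably start from.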

The latter issue presents a more difficult challenge, but I have an idea that may help to solve the dilemma:

The outcomes of moral/ethical analysis depend upon the programmed perspective from which the AI makes the analysis.  Obviously, relying solely upon guidelines consistent with those of ancient Egypt would produce an output divergent from what we'd see if we were to rely upon those elucidated by the Code of Hammurabi; these, in turn, would diverge from those of ancient Sparta, which would diverge from those of ancient Athens.  Each of these would diverge from the outputs commensurate with English Common Law, which in turn would differ from those of the Bill of Rights, the Constitution of the United States, its amendments, and U.S. case law.  Each of these would necessarily diverge from contemporary moral/ethical analysis.

Given the dynamic history of human thought, a question arises: how are we to reconcile these inconsistencies?

I surmise that we must instantiate a dynamic moral/ethical model that analyzes historical cases, provides us with current mores, and extrapolates those of the future.  Now, how might this be accomplished?

There are a multitude of means by which such a model might be created, but in the time I've had since reading your blog post, I've come up with the following system:

History provides us with myriad examples of situations requiring decisions based upon knowable circumstances, so we have the ability to record the situations from which these decisions arose, as well as their outcomes.  This would give us the historical framework from which each decision sprang.  Using multivariate analysis, we can determine the variables most relevant to each decision, as well as the relative importance of each.  Once our model's R^2 is as high as it can be (that is, once our regression curve reproduces the historical outcomes as accurately as we can achieve), given our desire to minimize its variance, we can then extrapolate the likely evolution of humanity's moral/ethical decision-making processes into the future.  With this model, we could then simulate the contemporaneous "ideal" decisions given the circumstances from which they are derived.  To evaluate the quality of the multivariate function our analysis produces, we can compare our simulated "ideal" outcomes with the actual ones, evaluating both the subjective quality of these decisions and their ramifications with respect to the evolution of our model.  We might also choose a number of chronological points at which hypothetical decisions are to be made and predict the expected outcomes of those situations.
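A minimal sketch of what this regression-and-extrapolation loop might look like in Python follows. Everything in it is hypothetical: the feature names, the numeric "outcome" scores, and the premise that a moral outcome can be reduced to a single number are assumptions made only to illustrate the mechanics (ordinary least squares, an R^2 check, and a forward extrapolation).

```python
import numpy as np

# Hypothetical historical cases: each row is [year, economic_pressure, external_threat],
# and the target is an invented "outcome" score on some moral/ethical scale.
# Every number here is a placeholder, not real data.
X = np.array([
    [1800, 0.7, 0.9],
    [1850, 0.6, 0.8],
    [1900, 0.5, 0.7],
    [1950, 0.4, 0.5],
    [2000, 0.3, 0.4],
], dtype=float)
y = np.array([0.20, 0.30, 0.45, 0.60, 0.75])  # invented outcome scores

# Multivariate linear regression via ordinary least squares.
X_design = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# R^2: how much of the variation in the historical outcomes the model explains.
y_hat = X_design @ coef
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Extrapolate a hypothetical future case: year 2050 with assumed circumstances.
future_case = np.array([1.0, 2050, 0.25, 0.30])
predicted_outcome = future_case @ coef

print(f"R^2 on historical cases: {r_squared:.3f}")
print(f"Predicted 'ideal' outcome for the 2050 case: {predicted_outcome:.3f}")
```

Comparing y_hat against y plays the role of checking the simulated "ideal" decisions against the actual historical ones, while the future_case prediction is the extrapolation step described above.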

Once the model is deemed satisfactory, we could then use it to actually make court decisions and/or case law based on our expectations for contemporary mores.  This would not only make the outcomes consistent with the history of human morality/ethics, but would also ensure that court decisions reflect the evolutionary forces that shape moral/ethical decisions, reduce public dissent with respect to the decisions enacted, and prevent potentially regressive legal precedent from spawning out of the limited number of perspectives now involved in the decision-making process (à la the Supreme Court of the United States).

Many might suggest that pure democracy would render such a model superfluous, and the claim would have some validity: after all, whenever a legal decision concerning the population at large is made, it will necessarily be a contemporary decision, reflecting contemporary values.  I would counter that leaving these decisions to "we the people" is risky: despite all claims that we are rational animals, we are also prone to the influence of emotion, which replaces logic with illogic and rationality with irrationality, allowing current but transient sentiment undue influence on our decision-making process (one can point to the temporary surge in American nationalism and the discontinuous spike in support for unprecedented infringements of individuals' right to privacy, new and radical surveillance measures, and otherwise unethical legislation).

I have very little idea as to the Singularity Institute's influence on policy initiatives, but as the technological singularity approaches, more credence will be given to Singularitarian foresight.  Thus, if this idea circulates among us and, eventually, the public at large, we might find ourselves capable of influencing, in a positive manner, the future of human civilization.

Best,

The Omega Point