The uncanny problem with autonomous AI

P = NP is a simple and short notation for a problem that “asks whether every problem whose solution can be quickly verified by a computer (NP) can also be quickly solved by a computer (P)”. P = NP means yes, P != NP means no.
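To make the verify-versus-solve asymmetry concrete, here is a minimal sketch using subset sum, a classic NP-complete problem (the function names and numbers are mine, purely for illustration):

```python
from itertools import combinations

def verify(numbers, target, certificate):
    # Verification is fast: one pass over the proposed subset.
    return sum(certificate) == target and all(c in numbers for c in certificate)

def solve(numbers, target):
    # Solving naively means trying every subset: exponential time.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

Checking a proposed answer takes one linear pass; finding an answer by brute force takes time exponential in the input size. P = NP would mean the gap between those two functions can always be closed.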

The golden rule is NP. A moral compass is P.

The golden rule is NP because the golden rule is individual. Each and every experience makes a new golden rule.

Unless P = NP, you can’t program the golden rule.

A computer with a procedural moral compass is a tool of the compass builder.

A computer with a machine learning based moral compass will kill people because people kill people.

The golden rule is a built in evolutionary feature. Societies build moral compasses. The question is why would you need a moral compass if you have the golden rule? Because the golden rule is a generic solver, while a moral compass is a particular one.

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law. Right, Soren?

The three forms of the golden rule:

One should treat others as one would like others to treat oneself (positive or directive form).

One should not treat others in ways that one would not like to be treated (negative or prohibitive form).

What you wish upon others, you wish upon yourself (empathic or responsive form).

Asimov’s laws of robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A moral compass is:

An inner sense which distinguishes what is right from what is wrong, functioning as a guide (like the needle of a compass) for morally appropriate behavior.

A person, belief system, etc. serving as a guide for morally appropriate behavior.

The full range of virtues, vices, or actions which may affect others and which are available as choices (like the directions on the face of a compass) to a person, to a group, or to people in general.

Autonomous AI becomes a problem when it has reach. The moment it becomes a self contained execution with reach, we hit a wall of perception: we’ll require of that AI the same behavior we expect from all other aware creatures. Only humans fit this model, and humans have a reason to follow the golden rule and an incentive to follow the moral compass.

In computer programming, any kind of programming, adding reason or incentive is easy. The problem is computing.

In Asimov’s laws of robotics there is great potential for computational problems, like infinite recursion. For example:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

What if a robot witnesses a human about to kill another human, and the only available intervention would kill the attacker? The robot can’t intervene, because it may not injure a human being. The robot can’t refrain from intervening, because it cannot allow a human being to come to harm. This is not a strong problem; it may be solvable with special cases, which turns robots into simple expert systems.

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.

This doesn’t work because we are working on feedback. Feedback is immediate; the reasoning above is complex. You cannot calculate that maxim every time. That is why the trolley problem is hard: there is no enforcing feedback for either option, because you cannot measure success, unless you build another expert system in which each human has a computed value, and by adding up points you get a decision by comparing numbers.
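Such a point-summing expert system, reduced to a sketch (the weights and attributes below are invented, purely for illustration):

```python
# Hypothetical value table: every attribute of a person adds points.
VALUE_WEIGHTS = {"base": 10, "is_child": 50, "is_bystander": 20}

def track_value(people):
    # Sum a computed value for every human on one track.
    total = 0
    for person in people:
        total += VALUE_WEIGHTS["base"]
        if person.get("is_child"):
            total += VALUE_WEIGHTS["is_child"]
        if person.get("is_bystander"):
            total += VALUE_WEIGHTS["is_bystander"]
    return total

def trolley_decision(current_track, other_track):
    # "A decision by comparing numbers": divert only if the current
    # track holds more computed value than the other one.
    if track_value(current_track) > track_value(other_track):
        return "pull lever"
    return "do nothing"
```

The point is not that the numbers are right; it is that once you assign numbers at all, the "hard" dilemma collapses into a comparison.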

A moral compass, however, is completely programmable. You can turn each vice or virtue we know into a piece of code, and it will easily compute what to apply where, and at the same time we can build a feedback system to actually teach the way of the compass.
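A toy version of such a programmable compass, with each virtue or vice as a small piece of code and a feedback hook to teach it (the rule set and weights are my invention, purely illustrative):

```python
# Each virtue or vice is a predicate over a described action.
RULES = {
    "honesty":    lambda action: not action.get("deceives", False),
    "generosity": lambda action: action.get("shares", False),
    "cruelty":    lambda action: action.get("harms", False),
}
VIRTUES = {"honesty", "generosity"}
weights = {name: 1.0 for name in RULES}  # feedback adjusts these

def judge(action):
    # Virtues that hold add to the score, vices that hold subtract.
    score = 0.0
    for name, rule in RULES.items():
        if rule(action):
            score += weights[name] if name in VIRTUES else -weights[name]
    return "right" if score >= 0 else "wrong"

def feedback(name, reinforce):
    # The teaching loop: strengthen or weaken one rule's weight.
    weights[name] *= 1.1 if reinforce else 0.9
```

Every rule here is a particular, enumerable case, which is exactly why a compass computes easily while the golden rule does not.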

An autonomous AI with a programmed moral compass will be able to distinguish right from wrong. But it will never be able to exhibit the Golden Rule. That is why there will be no empathy from a machine. There will be, no matter how fine the software, a moment when the uncanniness shall shriek our souls into a horror of alienation.

Unless P = NP. Because if every problem whose solution can be quickly verified can also be quickly solved, then there is the possibility of finding a technology fast enough to compute the golden rule in real time, at human-level requirements.

Conscious machine

So there is this intelligent body and this consciousness that must somehow work together. The body doesn’t care much about its eventual expiration, it cares a lot about present integrity. The consciousness cares less about present integrity but is scared shitless of the incoming end.

Intelligence is evolution at work. But consciousness is a side effect that arose from the predictive analysis which intelligence enabled. Your body knows what it does. It knows it so well it doesn’t care much about you, as a person. No cell asks you if you’re happy with its DNA reading. You are built on a scripted execution environment.

At the level of the body, prediction is minimal. There is possibly some kind of intelligence at the level of matter, and some primal form at the cellular level, but there are no unscripted decisions in the formation, maturing and living processes of a being. If there were, you’d be growing a skin pocket for that cell phone by now. The homeostasis solution is known, and at this level, in this environment, awareness is not required. Levels: this word, repeated in this paragraph, is essential to grasp. We’re layered things.

Our internal bodily processes happen without any supervision from what we call “us”. Isn’t that weird? It is amazing that most known diseases would be much easier to cure if we could “tell” the body to stop doing something. The freaking flu happens because of too much immune response; wouldn’t it be nice to tell your body to stop overreacting?

Why don’t we have a dashboard with status parameters? Because what we call “I” or “me” is merely a small side effect of a tool the body thought would be nice to use: consciousness. We have this illusion of free will because the body does not oppose anything in particular, unless you direct your self at you. Suicide and self harm are the exception not the rule.

We’re a thin film on top of everything and the everything we’re on top of doesn’t seem to share our goals and aspirations. It is important to realize how external to your body you are. That out of body thing reported so much, that is the constant thing happening. The senses are really immersive yet what you call you is out there already. This fluctuating electrochemical equilibrium that defines you is not made of the same thing the supporting layers are made of.

You are an intelligent machinery, with billion-year-old scripts executing non stop in perfect order, with some error correction and highly accurate short-term prediction abilities, stuck to this consciousness, which is a small thing busy with being itself while depending on the intelligent machinery for its existence.

We have a dream, a waking dream actually, of cohesion, as if we’re the same, integral, contiguous, smooth, inside and outside, at all times. The intelligent machine is. The consciousness is. By themselves. Together the cohesiveness breaks.

Freedom. I find it funny when I think of some cell locked into its place and purpose, controlled by the greater good’s aim of resilience. We see this interference of the machine with the conscious aware being in the way we model what we call the world: always coming back to units with easy to understand designated functions, always striving to define and box freedom into degrees that are painless to stretch into, continuously finding solutions that work on old maladjusted systems, resistant to environment but frail on the inside.

Higher conscious levels don’t have the access required to devise a clear system. Most of what we call “I” and “me” is so busy being separate that we rely to a huge extent on insight offered by unconscious and subconscious processes.

In a world made by fully aware conscious superior beings there is only spontaneous structure, the memory is collective, knowledge freely shared, goals vary and the greater good is the shared understanding of the uncertainty ahead. But there is no such thing so far.

We’re passengers on a vessel with a known destination, except that we don’t want to get there. Some are the noble guests on the deck of this vessel, some clandestine in the hideouts of the machine’s belly. Above and below, though, we share the impotence of changing course.

You know, Fabiana, I wrote this after going through the constellation based G universal database and trust network by Heather Marsh. Unsure of the connection :D, anyway …

My language

A white rose smells like living. I breathe it in to remind myself what it is to be missed from this dense embrace of Right-here-right-now.

A red rose looks like lure. I stare at it so that the fire in me sees it and tries to imitate it. Monkey see, monkey do can be used for good.

A lily looks like lust and smells like letting go. One can use them to let it go, literally.

The most relaxing sound is the laughter of a human challenged on their strongest territory, that innocent giggle mixed with the deeper spasms of hubris, evoking the joy of being close enough to the wounded gazelle.

In early mornings when the air is cool and it smells like dew, when the night faded, yet daylight is still not paying attention to me, I feel like me.

A tree is a thought, a forest an entire mind. I like to remember that they grow downwards, grounded in thin air and stuck with their roots searching for the center of the world.

A deep carpet of dead leaves is the unexplainable, and the wind swirling them into invisible spirals is the fascination the unexplainable begets.

Metaphor is the map we use to navigate our geography of knowledge. A language is some signage on the map: which are the roads to walk on, what depicts topography, where is there some water around. I also carry my compass of belief on which I see which way is north. When my beliefs shatter, I can always look up to the sky inside myself to search for the shine of my heart, which is visible only at night, and always points my north. When I believe, it’s a constant day inside! It’s the angst that sets the sun of my drive.

We’re metaphor first creatures. I am one at least. Will you be one too? It’s more fun, the more you learn, to map the knowledge together.

Do a cat’s purr and thinly sliced pupils make you feel like they know everything and won’t tell? Does a child’s laughter look like spring? Only sometimes?

A warm sand beach with playful waves poking at it in relentless joy is the home I long for. Unlike Camus I cannot lose the sea, I never had it. Maybe only my sea of wonder, but this kind of sea doesn’t make poverty sumptuous for me. Poverty is still the wasteland incarceration of potential.

I would be a good space traveler because I love long walks. Long walks consume eagerness away. Eagerness is tiresome, like the apple fruit that hangs so heavy that it almost breaks the branch, but won’t just fall already, like that rosebud that failed to spark the blooming and it rests frozen into its failure.

You cannot find Jesus. I am dead serious. Once Jesus gets on the map, it gets in the map. Not a Christian? Fear not, all religions will remap the map, for they all spill their supreme metaphors over the ones you have carefully drawn so far, smudging your knowledge, asking for your determined detachment right after blowing into the whims of the wind the intricate sand mandala of your soul.

See above? You can teleport inside your geography of knowledge: the subject is the portal. Unhappy with your setting? Change the subject. Are you the subject? Are you a subject?

I like to look at hands doing things. They are, for me, the reason we grew this world inside. We owe our brains to our fingers. When life got fingers it could finally put the universe on its pottery wheel.

Look at these letters right in front of you, these signs with the power of triggering your feelings, of changing your mind, of calling you into a place of common exploration, of raising your blood pressure, of making you blush because someone might know they made you horny and/or childish. A letter, a sometimes curly line, summoning then fiddling with time itself. See the efficiency of life?

Communication is all about a one on one shared map make believe.

My language is not made of letters though. I am a poor cartographer. On purpose. The roads of my knowledge map, all lead to treasure hunts. The water is sometimes an illusive oasis, other times a river of seas to get lost into. Cities of things I think about, villages of things I never understood, wilderness of serendipity testing the cadence of the sane cause and effect rhythm of my constant questioning, aroused by this silent reality’s ends that just won’t meet.

Ergo sum. That’s all we know.

Unfortunately we don’t have direct knowledge of the first part; that is to say, I don’t have direct knowledge of my own thinking because I am trapped in it. The moment my first thought appeared, it started to weave the layers upon layers of personality, from the inside to the outside.

I don’t know if I think, or I feel, or I grasp, or I sense, or I act or or or. We can understand that we’re doing, whatever it is we’re doing, but direct knowledge assumes examination from an outside reference, and we cannot be outside our thought processes.

Meditation elevates one to upper layers, but not outside, because outside the thinking loop you cease to be you; therefore whatever it is that is looking will not pass back the knowledge of what it saw.

Limiting our basic philosophical perspective to ergo sum is a healthy choice. It is akin to:

“Eat food. Not too much. Mostly plants.”, M. Pollan

because right now that is all the direct knowledge we have of how the complex relations between the billion moving parts of our metabolism work.

That’s all we know.

Perhaps, in the creation of artificial life and of the corresponding super artificial intelligence accompanying it, we’ll get that outside reference on the process of thinking that will validate or question the foundation laid by Mr. Descartes.

The return of the mainframe?

I wonder whether PCs will revert to being dumb terminals with a twist.

I imagine buying this hardware that can’t do much, like a Chromebook, that connects to everything, learns all there is online about me, and “knows” what to do next.

I connect to Facebook and suddenly my computer knows my funny bone.

I connect to Twitter and suddenly my computer knows my interests.

Just type in your Google account and we’ll teach your machine about who you are.

I imagine this future and I feel it coming at light speed. With Intel and Nvidia cranking out chips tuned for ML, with Google and Apple making farms on top of farms of computers built only for ML, the basic software, the operating system has to be just a pack of protocols and a browser which sports ML specific APIs that can access ML specific hardware, including ML specific storage.

I don’t believe in computers that won’t work without the internet. No matter how advanced we get in this area, even if Facebook Free Basics eventually breaks through and informationally enslaves, or colonizes, half of the planet, the idea of moving data through air or cables is prone to out-of-service events. Unless we begin to teleport electrons into machines, we’ll need offline capabilities.

But I do believe in computers that use the internet to become better. This has already been the case for a decade. But ML, and in particular industrial, global, privacy-shredding ML, will make computers actually become better online, not simply more useful, which is the case for now. Software will be about the same thing: learning. What do we want the computer for? Do we not want a wizard, an oracle, that knows in advance what we need, instantly what we want and always what we mean? We don’t need productivity. We need tools that do what is required of us, in our place.

Machines need not be “creative”. We have all the creativity the world requires, because it is our world. When machines get creative, it will be their world. You know why, don’t you? Yes, it is because that is what creativity does: building worlds inside the information space. Creative machines will start making their own worlds.

But this is reality, not the far future at the end of A.I., the movie, with those jelly robots who couldn’t reverse that Ice Age. I hope you’ve seen the movie, if only for that final act.

In reality, my future computer will be some kind of interactive machine which will support any kind of input and make sense out of it, not because it “thinks” like a human, but because it “learns” like one. We are heading into a future where your service providers will know you better than you know yourself. They already do, but we’re blessed with bad interoperability practices that still keep data in silos. For now.

I amuse myself with the idea that the keyboard will be an “accessibility” option for future generations. Think about a 3D space where an ML-tuned system predicts with 90% accuracy the next word in your sentence and, if a checkbox is checked, predicts the entire thought line in less than 300 milliseconds, building on your chat history, which it has learned from since you were six, when your parents gave you your first iDevice (or mDevice, who knows). Why type? Why speak? We already have an inner representation of a linguistic 3D space in our brains. That is why we like to move our hands and our bodies: to redraw the imaginary route between how concepts are connected inside.
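At its very simplest, next-word prediction is just a frequency table built from a chat history. A toy bigram model (nothing like the real systems, but it shows the mechanism):

```python
from collections import Counter, defaultdict

def train(history):
    # Count which word follows which across the whole history.
    model = defaultdict(Counter)
    words = history.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    # Most frequent follower of `word`, or None if never seen.
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None
```

Train it on everything someone has ever typed and `predict` becomes a crude version of that guess; real systems replace the frequency table with a learned model, but the interface is the same.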

If ML turns out to work consistently and reliably on small form factors, every giant software company will crank out its own chips, because it will be worth it: each consumer will want various ML-enabled devices for all kinds of things. The home hub everyone tries to invent might be GPUs stuffed in an internet-connected box, and the brand that makes it will be the next Apple.

The problem with the entire internet becoming a huge virtual assistant is: who will pamper our soft inside?

Why you should be bored of this revolution


It is the time of doers.

It’s gonna take a while for the age of thinkers to come again.

Every success story today is more about accomplishing what we call “wealth creation”, and less about what we call “good creation”.

Sure, maybe you are one who thinks that today is everything, that we are at some peak, unable to fall, or one who dwells passively in the sweet illusion of future prosperity, prosperity brought about by the fresh new princes of the world, or hey, maybe you are one who believes in basic income — to any of which I say, maybe.

I know that wealth creation has been here before.

I also know that the antithesis of wealth versus good is not immediately apparent. It can actually be shocking.

One is inclined to think that wealth is good. Of course, you take the cautionary steps and mention that wealth must be used for good, that wealth is not good by itself, that wealth should be shared to be good, but with all these safety nets on, we generally conclude that wealth is good.

But it ain’t.

Actually, in a social sense wealth is the opposite of good, wealth is evil.

First, wealth is not money! Precisely, wealth is physically fenced matter. Here is a way to explain:

wealth — the last stage of owning, which breaks matter from the universe for the single use of a person.

Like this:

1) free — keep from none
2) worthy — keep from one
3) earned — keep from some
4) property — keep from many
5) wealth — keep from all

Unlike property, wealth just sits and does nothing much.

Wealth is as useless as a pedestal. Think about it. All that matter below a human only so that one is higher than another …

Wealth is when property becomes used as bricks in a wall. Wealth is like the tomb competition in Egypt, where, for no logical reason, work, time, effort, planning, all got sucked into fencing matter to make a huge pyramid of stone.

Property is productive, property advances humans, but wealth is a force of stagnation.

Usually wealth is not real. It resides in illusory beliefs about the value of some things. This is different from property. And for this very specific reason wealth is fenced matter: we keep property out of production so we don’t suddenly see it lose its illusory valuation.

Not only people act poorly on wealth; entire countries do too. “Our land”, “our country”, “war”: all are about preserving, unchanged, an illusory drawing on a map for fencing matter.

The world has seen times like this before, times of doer’s revolutions. We are, it seems, bound to make the same mistakes, not ideologically, not conceptually, but emotionally.

We suck at evolving beyond the immediate.

But, if wealth is evil, how to still satisfy our innate greed?

Greed. You have it.

Greed masquerades as hunger but, in reality, around the end of childhood we sublimate the primal greed of eating for future hunger into fine and abstract forms of greed.

That moment when greed defined you was the loss of your innocence.

Why are we so shy in acknowledging greed? It is not a flaw, it is a survival mechanism and it doesn’t work right in times of abundance.

Greed is so basic that it defines an entire palette of emotions and sentiments.

Pride, the greed for love. Lust, the greed for safety. Vanity, the greed for embraces. Gluttony, the greed for touch.

Why do we shame greed, instead of cultivating it?

It is the same thing we do to our poor ego. We have been fighting against our ego since Vedic times, with no results.

If “ego fighting” were a company, it would be Yahoo. When Yahoo was a highly valued behemoth, all it did was suffocate promising businesses; in our case, “ego fighting” suffocates promising paths of spiritual achievement, gulping them down with teleological arguments, constantly capitalizing on the touted purposelessness of the ego. Do you know why it worked? Because we are all our egos, we are the purpose, so of course we don’t see it in our subjective damnation of it.

Sickly Ego + Primitive Greed = Hoarding

Back to our revolution. When doers get a revolution you get great things like electricity, mining enterprises, steel mills, mobile work forces, cars, lots of cars and machinery, items made of other items, lots of stagnating progress.

Stagnating progress is the progress that produces wealth.

There are huge opportunities in the times of the revolutions made by doers. But, as opportunity is when luck meets preparation, only a few get to experience the fruits born out of innovation. Then a larger group will enjoy the fruits of access. Then an even larger group will enjoy the fruits of disruption.

Do you know there is no such thing as “industry disruption”? Yes, there is only economic and social disruption, caused by winding down old industries and ramping up new ones.

The disassembly line for whole industries is not disruption, it is leverage.

But the world as a whole, during the revolutions of the doers, halts.

Doers need to sell. To sell, you shout louder than the other doers. To be heard, you make a clearer message. To make the message clearer, you make all communication static. Advertising copy becomes political discourse. Nobody wants to “complicate things”, and you elevate drivel to intellectual achievement.

We have a problem.

The problem is that people cater to stupidity when they lose sight of stupidity.

Take the UK’s vote. Brexit was a bad idea, and the people who chose a bad idea clearly were not smart enough to see it was a bad idea. There is no reason to pamper this and cater to stupidity with overarching histories of class inequality in the UK or of the shortcomings of bureaucracy in the EU. The British could have done many things better than the stupid one they voted for. People who defend the “Brexit” voters cater to stupidity.

Why does this matter?

It’s the damn drivel. Drivel as intellectual achievement is when we lose sight of the fact that we’re not bright anymore. And we start to defend stupidity, we start to elevate it to higher levels of awareness, we start to draw lessons from randomness, forgetting our order maker natures.

In my opinion, the Eurosceptic movement and the ascension of nationalist extremism in the US are perfect examples of the sustained sunsetting of the mind.

Why do we become chained to stupidity? Because during times of doer revolutions we rejoice in practical novelty, and in doing so we stifle intellectual novelty. Sure, no one will ever admit to it. Everyone is afraid of being labeled an Intellectual Yet Idiot. During times of doer revolutions we rejoice in positive news.

Thinking of problems should be done with a mandatory perma-smile.

Artificial sincerity

– Mirror mirror on the wall who’s the dumbest of them all?

You are.

– Fuck you!

You’re shitting me. WTF, wasn’t you the one who asked?

– Well yeah, but I didn’t expect such sincerity. Maybe a speech on the relativity of stupidity would have done a better job ….


– Well, what the fuck are you a smart mirror for if all you can say is “yew arr”.

I am a smart internet enabled mirror. I have analyzed your recent behavior and decisions, and based on your intonation and sentence structure I inferred you meant to calibrate your current intellectual state. This analysis was statistically compared with data from over three hundred million people who allow their stupidity level to be collected online. Next time I will be less familiar.

– You mean, how stupid am I?


What does being “alive” mean?

There has been a great deal of discussion about artificial intelligence and the potential dangers it poses to humans. Then there have been countless iterations of interpreting, via stories, moral lockdowns such as robot rights, slave robots, artificial will and so on. But the main question regarding A.I. is

when exactly does A.I. become more than just a fancy tool?

My main hypothesis is that artificial intelligence will not be more than a query-answering robot unless we factor into it unknowns and inevitable, time-based cycle endings. In particular we need a system that fights for its existence, a system that runs in a scarce-resource, competitive environment, and, even more, a system that cannot access all the answers.

Artificial life is not only intelligent; it has the same goal as life in general: to defeat time, because it is threatened by it. And when we create it, the artificial part will be lost, because, in the end, we’re meant to do it as our next evolutionary action towards time resilience.

In that sense, we could say something is alive if:

  1. the base goal is to preserve low entropy
  2. all actions are determined by a predestined time resilience
  3. it is exposed directly to the effects of time, in particular decay

Even if a being is immortal, all three of the above can still apply, and the being is alive.

The goal of any living being is resilience as a form of preserving its status.

The formula of life:

R = S^P

where R is resilience, S is status and P is preservation. To ease calculation R, S and P can be integers, but in reality R is a time delta, S is a matrix and P is an algorithm.

Status is a complex notion that sums up genetic, biologic, cultural, societal and other states. Preservation is an algorithm that employs innovation, mutation and other methods of predictive feedback initiation. Both status and preservation change in time based on environment updates.

At birth a living being has P = 0, therefore R = 1.
At the time of death, a living being has S = 0, therefore R = 0.
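With the integer simplification, a resilience of the form R = S^P fits both boundary conditions; reading the formula this way is an inference from the birth and death cases, and it also matches the later remark that resilience grows exponentially with preservation:

```python
def resilience(status, preservation):
    # R = S^P: grows exponentially with preservation,
    # collapses to zero when status is gone.
    return status ** preservation

# At birth, P = 0, so R = 1 regardless of status.
# At death, S = 0, so R = 0.
```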

If we create an A.I. system and aim to make it work above the limitations of a fully determined program, we require the incorporation of the formula of “life” in the main loop. The stability of the main loop must decay due to direct interaction with a physical hardware clock.

A decaying main loop is created by updating a composite variable on every tick of the hardware clock. Say we have a variable made of many parts, A..n; then there is a homeostasis function H that produces a new value of the composite variable A..n.

A..n = H(A..n, S, E), where S is state and E is environment

Each part of the composite variable is then injected in the loop as values for various internal parameters. The H function applies data from state and environment to A..n. State is the current execution state and environment is the input of the current execution state from sensors and detectors.
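A minimal sketch of that decaying loop; the body of H and the decay constant are my invention, since the text only fixes the signature A..n = H(A..n, S, E):

```python
import time

def H(A, state, environment):
    # Homeostasis: decay erodes every part of the composite variable
    # on each tick, while state and sensor input push back.
    decay = 0.99
    return [a * decay + 0.01 * (environment[i] + state) for i, a in enumerate(A)]

def main_loop(ticks=100):
    A = [1.0, 1.0, 1.0]  # composite variable A..n
    state = 0.0          # current execution state
    for _ in range(ticks):
        environment = [0.5, 0.2, 0.8]   # stand-in for sensors and detectors
        A = H(A, state, environment)    # updated on every clock tick
        state = sum(A) / len(A)         # parts injected back as parameters
        time.sleep(0)                   # placeholder for the hardware clock tick
    return A
```

Without the H update the loop is fully determined; with it, the internal parameters drift with every tick and the system has to spend effort resisting that drift.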

Thus resilience is not embedded in the system; it rises naturally from decay driven by external action. The better the preservation, the bigger, exponentially, the resilience. The better the status (both complexity and connections, as status is a sum of states), the better the resilience.

If we build an A.I. that generates questions, not answers, using answers simply to point to new questions, awareness should arise by itself, but only in the presence of decay and resilience; otherwise we’ll never know if the system is aware, as it has no reason to expose it.

On awareness

Awareness has to precede intelligence; if there’s no awareness there’s no intelligence, even if the awareness comes from the observer.


Why so? Is a dog aware? How about a goldfish?

Awareness, as I define it, is the running “I don’t know”, that is, the amount of uncertainty which accompanies the brain’s constant predictions. Intelligence, on the other hand, is directly proportional to the depth of our future’s horizon: how far into the future can we see?

I think a dog is aware, but less aware than a human. A goldfish is aware, but less aware than the dog. Their intelligence levels allow for short-term predictive guesswork and are highly dependent on learned behavior.

Take a pet, for example. They wait for their owners to come home every single day. There is a level of awareness about what is going on that sometimes looks amazing: my dogs know weekends from weekdays, they anticipate when we’ll take them on a trip, and so on. But none of my dogs will doubt the order that they’re in: this is how it is, and they settle for it.

Awareness is connected to intelligence, but not preceded by it. There can be intelligence without awareness; a child is the best example. A child is highly intelligent, but because it has not learned enough about the extent of the environment, its awareness levels are low: uncertainty is low, the schedule is the schedule, parents are parents, home is home and playground is playground. A child’s awareness rises in play, when the game’s effect of imaginary world building increases the amount of uncertainty.