07 August 2009

Awareness of System Boundaries is Necessary for Success

A system is where you define it. Sometimes it's easier for people to agree on the boundaries of the system, sometimes it's harder, but either way it's always arbitrary. In keeping with the fractal nature of systems, the subsystem boundaries are also arbitrary.

The definition of a system's performance depends on its boundary. A car's performance is measured in miles per hour because the boundary of the "car system" is between the tire and the road. We could say that the car actually stops at the axle and that the wheels are a separate system. Then the performance of the "car system" would be measured in revolutions per minute. However, the car and the wheels together are generally considered one system. On the other hand, when we talk about a highway at rush hour the cars are considered subsystems of the traffic jam. Similarly, a company's organizational chart is an illustration of subsystems within systems.

People who are in charge of a subsystem will generally consider themselves in charge of a system. When they strive to do the best job possible they will usually try to optimize the performance of their system. Just like the performance of the axles in a car is measured differently than the performance of the tires, the performance of one group is measured differently from the performance of the larger group it is a part of. The person in charge of the subsystem can't measure the performance of the system, because that's not where they are; all they have to work with is the performance of their subsystem.

This is a problem because to optimize the performance of a system you must de-optimize the performance of all the subsystems.

For example, a "tuned" car doesn't have the most powerful engine because it would rip the transmission apart. If the transmission were beefed up it would spin the tires instead of moving forward. If the tires were stickier it would warp the frame. If the frame were reinforced it wouldn't leave enough space for the big engine, or it would weigh too much and it would need a bigger engine, starting the cycle over again.
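The tuned-car cycle is really a bottleneck argument, and it can be sketched in a few lines of code. This is a toy model made up purely for illustration (the component names and numbers are invented): system performance is capped by the weakest subsystem, so over-building any one component buys nothing.

```python
# Toy model (invented numbers): a system of chained subsystems can only
# perform as well as its weakest link.

def system_performance(subsystems):
    """The bottleneck subsystem sets the performance of the whole."""
    return min(subsystems.values())

balanced = {"engine": 300, "transmission": 300, "tires": 300, "frame": 300}
lopsided = {"engine": 500, "transmission": 300, "tires": 300, "frame": 300}

print(system_performance(balanced))  # 300
print(system_performance(lopsided))  # 300 -- the bigger engine bought nothing
```

The lopsided build spends more and goes no faster, which is the whole point: optimizing a subsystem past the balance point is waste.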

The Tsar Tank: a monument to unbalanced subsystems.

A system must have subsystems that are in balance with each other based on the performance goals of the system, not on the performance goals of the individual subsystems. This is relatively easy to understand when the systems are not people. But as soon as people get involved they start to get all pissy about being a subsystem rather than a system.

This is why the executives of companies are constantly being reminded, often by highly overpaid consultants, that they have to explain to employees how their actions affect the company's overall goals. Otherwise, all they have to go on is the performance of the system they are aware of, which is the one they happen to be in charge of. When they do their best they will actually be destabilizing the company.

BTW, this is why companies alternately claim it is better to keep their employees powerless and scared, or empowered and brave, depending on which extreme they are already closer to. A company that judges its employees on how well they aid the overall goals will strengthen the company by empowering everyone. A company that judges its employees on how well they perform on their section's individual metrics will strengthen the company by squashing everyone.

06 August 2009

Spontaneous Misorganization Stifles Innovation at Large Companies

New ideas rarely emerge from bureaucracies. Large companies are generally bureaucratic, therefore new ideas rarely emerge from large companies. This is because people suck.

Pictured: People. Sucking.

A person doesn't suck (usually), but people do. The more persons are involved in something, the more likely it is to suck. Observers often say that culture is the reason small companies are better at innovation than large companies; Bruce Nussbaum, Marty Cagan, and Jeffrey Baumgartner, respectively:

[large companies] don’t understand the critical cultural and social science components of [innovation].

...there is much that the typical large company could do to improve the ability of their employees to innovate.

The culprit behind this discrepancy is the decision making structure in each kind of company.

It's not culture, or at least that's not the first cause; it's the number of people. That's not to say it's the number of people technically grouped together. What matters is the number of people who have power over the outcome. As a general rule, people all want something different. It is impossible for everyone in a group to have the power to get what they want, but it is possible for everyone in a group to have the power to stop the process and ensure no one else gets anything. We have a built-in feeling of what is fair or not, and we act on it by refusing to accept deals in which we get something, but the something is less than we feel is fair. Give enough people power over the outcome and nothing will ever get done.
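That last claim can be made concrete with a little arithmetic. In this sketch (the 10% per-person blocking rate is an assumption picked for illustration), each veto-holder independently blocks any deal they judge unfair, so the odds of anything surviving shrink exponentially with the number of veto-holders.

```python
# If each of n veto-holders independently blocks a deal with probability p,
# the deal survives with probability (1 - p) ** n.
# The 10% blocking rate is an invented assumption for illustration.

def survival_probability(n_veto_holders, p_block=0.10):
    return (1 - p_block) ** n_veto_holders

for n in (1, 5, 10, 25, 50):
    print(n, round(survival_probability(n), 3))
```

At a 10% individual blocking rate, ten veto-holders already drop the odds of approval to about 35%, and fifty drop it below 1%. Give enough people power over the outcome and, mathematically, nothing ever gets done.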

Small companies tend to have few people in them. They're "streamlined." That's a nice way of saying they have fired, or simply never hired, people they didn't need. Large companies tend to have a lot of people. They're "bloated." That's an acceptably crude way of saying there are too many people involved in what's going on. Large companies provide a reliable flow of income, so they attract the people who can't hack it anywhere else. That's fine as long as they do their well-defined job and nothing else.

You can have some power when the guy above you either loses all of it or gets more.

A lot of people becomes "bloat" when those people start getting the power to make decisions affecting their own job. Since their job is all they can handle, they are not going to make decisions which result in changes to their job. This is not to say that hiring more people is bad, only that distributing power over one decision is bad, and more people hanging around makes it more likely power will end up distributed.

Large companies can innovate just fine, and do, when they realize that throwing more people at a problem actually makes it worse. As groups grow larger, and more secure in their position, they become much more likely to spontaneously mis-organize themselves.

For example, as a company grows it tends to take on larger and more complicated projects, which necessarily involve more people with specific expertise than before. That's as it should be; the mistake is giving all those people power over the project. The appropriate way to organize it is to give one person power over the project, and make sure they get good advice from all the experts. This takes deliberate structuring, because anyone who feels crucial to a project's success will feel it is unfair that they have no official power over it. That makes it much more likely one of them will demand, and get, some measure of power. And that makes it more likely the others will demand, and get, the same.

Then they will use their power to stop the project when it doesn't meet their standards, which it inevitably won't, as I'll explain in the next post.

05 August 2009

National Healthcare Reform Leadership

A new national healthcare system is in the works, or at least a modified one. Which is good, because no matter how you slice it the country needs to do something about steadily increasing healthcare costs, says the CBO.

If rising healthcare costs were a steamroller. . .

In BusinessWeek, Nikos Mourkogiannis proposes that the new system should focus on cutting costs. He also says that, while that is a fairly obvious consensus view, actually implementing it will require prodigious acts of leadership. The general idea is to create a system that ensures the average person will have a minimum level of benefits.

As Mourkogiannis points out, the new healthcare system will not be able to do everything for everyone, it will have to make triage decisions which first reduce costs (and do everything else second). From the White House:

President Obama is committed to working with Congress to pass comprehensive health reform in his first year in order to control rising health care costs, guarantee choice of doctor, and assure high-quality, affordable health care for all Americans.

The fun thing about mission statements is that they often utilize a lot of commas. Giving commas to a bureaucrat is the linguistic equivalent of giving a credit card to a teenager. All sorts of commitments are made with little consideration given to whether or not they can all be delivered. The term "high-quality" has a lot more wiggle-room than "affordable," and the situation demands the focus be on "affordable" anyway, so "high-quality" is really only in there to attempt to placate fears that the healthcare storm troopers are going to drag you off to the crematorium when you reach 65.

Your grandpa ran off to join the circus. Here's his replacement.

High-quality will have to be secondary to low cost. The only reason we need healthcare reform is that our current approach will bankrupt us. So, at a minimum, we have to do the same thing only cheaper. Improving quality would be nice, but it is not the primary driver; cost is.

Now, try explaining that to the people who will cost too much to take care of.

This healthcare reform situation is a good example of a situation that demands attention be paid to systems, innovation and leadership. The system is monstrously complex, implementing it won't work without some innovations that no one's thought of yet, and even then the leadership challenge is pretty much guaranteed to be beyond anyone's capabilities. We (Americans) are okay with the idea that the system can't take care of everyone. We are not okay with the idea that the system will officially not be taking care of everyone because they are officially on the wrong side of the cost/benefit analysis.

That being assumed, what sort of leadership approaches have the best chance of getting people to at least let the necessary changes happen, if not get people excited about the changes?

  • "We are working hard for you, but someone/thing else is working against us." Whoever gets saddled with the job of representing healthcare reform can try casting themselves as the plucky, unquestionably-good-hearted hero valiantly struggling against an evil menace. The menace could be immigrants flooding our emergency rooms, greedy HMOs, or just the vast scale of the problem.
  • "We all know more than you and we say this is a good idea/working." Several large stakeholders in the healthcare marketplace, like the pharm-companies and AARP, have already expressed support for healthcare reform. It could be possible to present a unified front that overwhelms any attempt to claim it's a bad idea.
  • "Every alternative is worse, especially doing nothing." Proponents of healthcare reform, like me, have pretty much started here anyway. This approach assumes that this will remain the primary tool moving forward. It could be expanded upon by occasionally adding a new description of just how bad the future will/could get if things aren't done in a particular way.
  • "I was worried, but now I see there's nothing to worry about." Instead of the leaders speaking, they could get average Joes and Janes to speak for them. That way the people who need to be convinced could see people just like them being convinced, rather than Ivy-league, smooth-talking socialist puppets trying to be convincing.

04 August 2009

Leaders Are Framers

Since not much can be said about what leadership actually is, but people talk about leadership a lot, they must be talking about something else. I think they're talking about the incredibly complicated art/science of framing reality.

According to a study by the Hay Group, a global management consultancy, there are 75 key components of employee satisfaction (Lamb, McKee, 2004). They found that:

  • Trust and confidence in top leadership was the single most reliable predictor of employee satisfaction in an organization.
  • Effective communication by leadership in three critical areas was the key to winning organizational trust and confidence:
    1. Helping employees understand the company's overall business strategy.
    2. Helping employees understand how they contribute to achieving key business objectives.
    3. Sharing information with employees on both how the company is doing and how an employee's own division is doing - relative to strategic business objectives.

I think this illustrates THE thing that people in leadership positions have to do. They have to frame reality for everyone else.

There is more in the world than we can possibly perceive, and there is more in what we can perceive than we can possibly focus on, and there is more in what we can focus on than we can possibly make sense of. Appropriately, this scares the bejesus out of us. No matter how well we manage to understand the world, there will always be (infinitely?) more that we do not understand. This awareness leads us all to the obvious conclusion that if we can only understand part of the world, it should be the most important part.

But how can we be sure we understand the most important part of the world if there are things we don't understand? Maybe the most important part is one of those things we missed. This universal doubt drives all of us to the next obvious conclusion: to ask someone else.

Since we are searching for the most important thing, upon which we can focus, we naturally assume it can be known. Therefore it makes perfect sense that someone else could know it. Whether or not they actually do cannot be determined, but they could know it. We look for a framework we can use to understand the most important thing, and from there to build our life around.

Leaders provide that framework for us. They tell us what is important. This is why leadership looks the same at all levels, including personal, because everyone has some idea about what might be important. Leadership is touchy because when an organization tells us what is important we might be grateful, or we might be offended. Or we might be apathetic. It all depends on how we frame reality for ourselves and how our personal framework meshes with the leader's framework.

When the leader's framework contradicts our own one of them has to be rejected, so either we think we are wrong or we think the leader is wrong. When the leader's framework merges with our own we feel completed. It is that feeling of appropriateness that creates a leader-follower dynamic.

01 August 2009

The Consciousness Consensus

There is no consensus regarding what consciousness is, let alone whether or not it can be created artificially. The introduction to Cognition Distributed does an excellent job of walking the reader all the way around the abyss that is our lack of understanding of consciousness.

It takes a $100 book to explain that something can't be explained.

When you say to yourself, "What is seven times nine?" and then "sixty three" pops up, you are certainly conscious of thinking "sixty three." So that's definitely mental; and so is the brain state that corresponds to your thinking "sixty three." But what about the brain state that actually found and delivered "sixty three"? You are certainly not conscious of that, although you were just as conscious while your brain was finding and delivering "sixty three" as while you were breathing, though you don't feel either of those states.

We can agree that consciousness emerges from a sufficiently complex system, but not from insufficiently complex systems. While the metaphysical doubt that a rock could be somehow conscious, or a tree, or Gaia, always remains. . .it is merely a qualification made to preserve intellectual honesty. The doubt is really reserved for things like biomes and planets, not for dust and bushes. It's subjective, sure, but it's the best we've got.

This question has been addressed so often that the language for discussing it is well established. It is possible there are just things that cannot be something, kind of like how "0" and "zero" are things that represent nothing. It's a paradox, not an inconsistency.

Anywho, the really interesting development is that as we offload cognition into artificial actors we are accumulating context for the discussion that was impossible before the microchip. New innovations are created every day that do things we previously associated only with conscious actors. Since we do not consider these new mechanisms conscious, we can no longer say those functions require consciousness. If a function can be provided by purely vegetative processes then consciousness must be something else.

Consciousness is one of those leading-edge concepts because everything we've nailed down as mere complexity, so far, has failed to explain it. Like how the round Earth was just a theory until someone actually managed to sail all the way around it, because the surface that had been explored up to that point didn't fully explain the Earth's roundness. I think we'll figure it out eventually. . .probably a few seconds after SkyNet becomes conscious and tries to kill us all. . .but life's a journey, not a destination.

29 July 2009

Robots Don't Kill People, People Do

Rod Furlan twittered a day in the life of a Singularity University student. At 12:08:21 PM he asked, "When a robot kills, who pulled the trigger?"

This question cycles through the public consciousness every year or so and is well illustrated by the South African National Defence Force's 'little' accident with an automated Oerlikon GDF-005 (it sprayed 500 35mm anti-aircraft shells around its firing position, killing 9 and wounding 11).

Oerlikon GDF-005, A.K.A. the T-001

At the moment no one seriously considers (except maybe the Koreans) holding the auto-turret responsible for the killings, because the system that controls the mechanical stuff isn't complicated enough to be plausibly sentient.

...as backed up by empirical research by Friedman and Millett (1997), and by Moon and Nass (1998), humans do attribute responsibility to computers. Of course, that we may be inclined to blame computers does not entail that we are justified in so doing. Although computer systems may clearly be causally responsible for the injuries and deaths that resulted from their flawed operation, it is not so clear that they can be held morally responsible for these injuries or deaths.

However, some people are really excited about the possibility that computers will eventually (sooner rather than later, yay!) be complicated enough for us to blame things on them. Without going into the background on this topic, the basic requirement for something to be responsible for its actions is that it be consciously aware of the difference between right and wrong. Since computers just do what they are programmed to do, and have no ability to understand the concept of "should," they are not responsible for anything. Computers just follow orders.

Computers totally would have let the Nuremberg defendants off the hook.

The human brain is a system, and a computer is a system, so it is plausible that computer systems can increase in complexity and reach a par with the human brain. So, at some point we will probably have to deal with computers that actually do understand morality. Since we'll still be human, we'll probably give them a gun and tell them to go kill our enemies. However, before we can pull the "the robot did it on its own" card, we will be forced to use old-fashioned computers to kill people.

Dr. Ronald Arkin wrote a book about this, and did a few interviews, and worked on a prototype computer-based morality system. His thesis is that robots can be more moral on the battlefield than humans because they are capable of making fewer mistakes. They won't make decisions based on fear, anger or recklessness and they will evaluate every situation on its own merits instead of suffering from 'scenario fulfillment' and jumping to conclusions.

From a systems standpoint it seems fairly obvious that computers will eventually be more complicated than humans, and at that point they will probably have to start taking responsibility for their own actions (and for cleaning up that pigsty they call a room). Until then, however, we humans will have to continue taking responsibility for robots that are put in increasingly complicated situations. Dealing with this transition period will require innovations that have not appeared yet. At some point it becomes difficult to hold a person responsible for the actions of a system they own but can't possibly understand fully enough to predict its actions in all situations. Isaac Asimov built part of his career exploring the ways a robot could do totally unexpected things while blindly obeying the Three Laws of Robotics.

We need an innovative way to interpret who is responsible for the actions autonomous (but unconscious) systems take. Even when some computers truly are unequivocally responsible for their own actions, the vast majority of computers systems will continue to be unconscious. Inevitably, some of the moral computers that we declare responsible for their own actions will assume control of non-moral computers that still aren't responsible for their own actions.

The question is, 'in the future, when a moral computer tells a non-moral computer to kill, who can I sue?'

22 July 2009

President Obama's Healthcare Newsconference

The President addressed the nation. . .or at least as much of the nation as felt like watching the whole thing. The ones who relied on soundbites will miss out on the chance to draw their own conclusions, because anyone who uses soundbites or quotes is trying to back up a predetermined point :-)

He said, "I'm the president, and I think this has to get done." This sort of statement is interpreted as arrogance by people who don't like the speaker, and as authority by people who do. I think the truth is actually somewhere in the middle because the laws of physics actually require a phenomenal concentration of arrogance to stabilize the phenomenal concentration of authority that comes with the Presidency.

What's he got to be smug about anyway?

On the subject of healthcare reform, I think he did a good job of summarizing the reason we should at least talk about it. He said the cost of doing nothing is more than enough reason to do something (cuz the current system is on track to bankrupt the federal government); since we should do something, we should do it right. Doing it right means it doesn't add to the deficit, it protects the middle class and it satisfies healthcare experts. He also said there is so much waste in the current system that we can provide healthcare to everyone; if we can get people insurance that pays for preventative care they won't end up in the emergency room making the rest of us pay for their amputated foot instead of for cheaper counseling on diabetes prevention.

An apple a day keeps our economy afloat for another fiscal year.

The healthcare system is incredibly complicated. That's something that seems to be forgotten when discussing healthcare reform. Additionally, it is a service that cannot be suspended while being overhauled. The average person doesn't even have the language skills necessary to frame the issue, let alone discuss anything approaching a solution. By way of an example, out of the dozens of times pundits mentioned the "cost" of the healthcare reform plan, only a couple times did anyone bother to mention that it was the projected cumulative cost over 10 years, expressed in current dollars.

Even trying to talk about how much it might cost requires several qualifications and each qualification can be further qualified. Thinking about it is tough, let alone expressing it in a sentence. So, instead of admitting how complicated it is, we just gloss over the parts (99.99%) we don't understand and assume there is nothing significant hiding in the fog. It's like when people assumed the ocean floor was flat until they actually got a look at it.

Pictured: Advanced sentence structure.

Anywho, the commentary which followed was even more fun.

  • he didn't add anything new
  • apparently Henry Louis Gates Jr.'s arrest is way more important than national healthcare

I suppose we should forgive CNN. Their Black In America 2: The Revenge of Black In America program was airing next and they really wanted to plug it. Apparently the best way to keep the attention of people who tuned in for a 45 minute lecture on healthcare reform is to claim it was a waste of time and that we should be paying attention to some dude who got arrested and then wasn't charged with anything. CNN is classy that way.

  • he's a great liar
  • nothing is worth doing unless a list of bullet points can fully explain it

Luckily, FOX was busy furiously ignoring the discussion of what happened to that dude who got arrested (oh, was he BLACK, we totally didn't notice) so they had plenty of time to talk about the news conference. Of course, by "talk about" I mean link everything to Republican talking points and, when that was too hard, tell the audience they should be too confused to remember to blink their eyes or wipe the drool off their bib.

  • I don't understand what his plan is (despite the fact that he opened the press conference by saying the plan is still being debated)
  • I don't want the government aggregating rates of medical conditions (despite the fact there is no reason names need to be attached to conditions)

Maybe it's me...but O'Reilly always claims to adore Obama...while always coming up with a reason to hate everything Obama does. In this case he was very clear on two points: that he couldn't understand what Obama was saying and that he went to college so he totally should have been able to. Then he brought in some dude to talk about how healthcare reform is actually really simple, and all the possible changes (all 2 of them) must inevitably lead to a zombie apocalypse.

He'll be standing between you and your healthcare.

  • Tough to make a hard sell for a proposal that's still evolving
  • Republicans don't have an alternative, just objections

I think it's the hair. Anderson Cooper, like Superman, relies on his super-powered hair to save mankind once a week. Just imagine the desperate straits we'd be in if his hair was more like this:

15 July 2009

Definition of Innovate (2 of 3)

Innovate isn't really all that hard to define, but I think rephrasing the common definition will place the emphasis on a more useful concept.

The Online Etymology Dictionary states only that the word originated in 1548 and is based on Latin 'in' (into) + 'novus' (new), so it meant 'to renew or change.'

The Random House Dictionary states: to introduce something new (for or as if for the first time), to make changes in anything established.

The American Heritage Dictionary states: to begin or introduce (something new) for or as if for the first time.

The Merriam-Webster Dictionary states: to introduce as, or as if, new.

I think the concept of 'introduction' is important to understanding the definition of innovation. It is common to all these sources because, just like any other introduction, it must be done by a person. All innovations originate with an individual who then "introduces" the rest of us to it. This is because an innovation is relative. You can only be introduced to something once, because after the introduction you are familiar with the introduced. An idea can only be new to you the first time you are exposed to it. From then on it is no longer an innovation in your eyes, though it can still be an innovation to someone who has yet to be introduced to it.

Things are only called innovations until they are integrated into a conceptual framework. After a period of adjustment we consider any new idea to be an established part of the environment, and therefore not new. So it is a label applied by an actor, whether they be an individual or a society, when the actor is first introduced to something new.

For this approach to work we must understand it to mean that everyone is first introduced, including the person doing the introducing. The innovator, then, introduces the innovation to themselves first. This is consistent because the important event is the understanding by each individual that something is new; that is the metaphorical moment of introduction.

Therefore, I propose the following definition of innovate: to understand something to be different from anything understood before.
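That definition is observer-relative, and it can be restated as a tiny state machine. This is just a sketch of the idea (the class and method names are my own inventions): a thing is an innovation to an actor exactly once, at the moment of first introduction, and the innovator is introduced first.

```python
# Sketch of the proposed definition: an idea is an innovation to an actor
# only until that actor has been introduced to it. Names are illustrative.

class Actor:
    def __init__(self):
        self.known = set()

    def is_innovation(self, idea):
        return idea not in self.known

    def introduce(self, idea):
        """Returns True only on the first introduction."""
        novel = self.is_innovation(idea)
        self.known.add(idea)
        return novel

inventor, everyone_else = Actor(), Actor()
print(inventor.introduce("wheel"))           # True: new, even to the inventor
print(inventor.introduce("wheel"))           # False: no longer an innovation
print(everyone_else.is_innovation("wheel"))  # True: still new to everyone else
```

The same idea can be an innovation and not an innovation at the same time; it just depends on which actor you ask.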

If you feel like researching the topic further Davit Yost, McKinsey and the World Economic Forum, and businessPOV are some resources.

14 July 2009

Definition of Leadership (3 of 3)

Leadership is pretty difficult to define. A few years back I made a bet that I could produce 10 different legitimate definitions of leadership, and delivered an even dozen without any difficulty. The word "leadership" is searched an average of four million times a month and produces more than one hundred million pages. (For comparison, "American Idol" is searched an average of 14 million times a month and produces two hundred million pages.)

The Online Etymology Dictionary doesn't even have an entry on the subject, and has very little to say about "leader." The Random House Dictionary and the Merriam-Webster Dictionary can't seem to define it without using the word "lead."

Congratulations must be given to the American Heritage Dictionary for providing "guidance and direction" instead of just "the act of leading."

There is a particular trend in the introductions of attempts to define leadership, as illustrated by the Business Dictionary, About.com, and even Wikipedia that is best summarized as "no one's really sure but here's what the consensus seems to be." People and organizations are usually careful to state that they are providing their view on leadership, which they might be quite confident in, but which they will not claim is The Definition of the word.

I think that leadership is, quite simply, the act of dealing with change. I think this is The Definition, and that it has been missed, because there isn't much more one can say about it. The general consensus definition of leadership is usually something along the lines of "inspiring a group to action." However, this is almost always qualified with a list of additional actions that should be included, and a caveat that even then the definition is probably incomplete (and even when the definition is complete it shouldn't be taken strictly literally).

Working from that definition, then, it makes sense that it would be misunderstood. Because leadership is dealing with change, unlike management, which is dealing with complexity, the act of leading is basically just guesswork. There isn't much more you can say about it. Take what you know about a situation and try to predict the future; you'll be wrong sometimes and right sometimes and hopefully you'll get better. Now, the position labeled "leader" does require an array of skills like management, communication, character, etc., because once the guess is made it becomes a mere complexity challenge, which can be managed. Management can be explained, so that is what gets explained, because the leadership part of it actually takes very little explanation.

I went into more detail in this post.

Definition of System (1 of 3)

The biggest problem encountered when discussing the three concepts systems, innovation, and leadership is that people rarely agree on what the words mean when they are used. To help narrow down the list I will state explicitly that I am using these terms in their general sense and avoiding using them as specific jargon like you would find in a technical medical or computer discussion.

This post is the first in a three-part series. Each installment will investigate the definition of a word by summarizing the process I went through to generate a useful definition.

The history of the word, as related by the Online Etymology Dictionary, can be traced back to the word 'systema' which is made up of 'syn' (together) + root of 'histanai' (cause to stand); meaning "set of correlated principles, facts, ideas, etc."

The Random House Dictionary has a half dozen (relevant) overlapping definitions of the word. They can be condensed like this: an [ordered/comprehensive/coordinated/formulated/regular] [assemblage/combination/set/body] of [things/parts/members/facts/principles/doctrine/methods/schema] forming a [complex/unitary] [whole/scheme].

The American Heritage Dictionary has fewer overlapping definitions: A [group/organized set] of [interacting/interrelated/interdependent/functionally related/coordinated] [elements/ideas/principles/objects/phenomena] forming a complex [whole/order].

The Merriam-Webster Dictionary has even fewer: An [interacting/interdependent/related] [group/arrangement] of [items/bodies/objects/forces/devices] forming a [unified/harmonious] [whole/network].

If all that could be further condensed down to a single sentence it might look something like this: a system is an integrated group of things which form a whole. However, I think there is an important concept being left out of these definitions.

An important concept to capture is that at any given moment a system is an arbitrary boundary drawn somewhere in a hierarchy of subsystems. A system is simultaneously a system and a subsystem, so defining it in relation to its subsystems makes it a sort of self-referential meta-definition. It's not as simple as nesting dolls. . .

. . .it's more like a fractal.

Therefore, I propose the following definition of 'system': a purposeful choice of scale in an infinitely complex hierarchy of nesting subsystems, the discussion of which involves integrated collections of related things.

For some other discussions of the definition of systems, the Univ. of Missouri-St. Louis, the Division on Systemic Change, and the International Society for System Sciences are valuable resources.

11 July 2009

Our "Self" Wants More (and More)

One of the things we humans think sets us apart from (other) animals is that we can invent and use all sorts of nifty tools. While research has demonstrated that animals can use natural tools, and even artificial tools, there is still a dramatic difference in the scale of tool use between humans and our closest competitor.

Here we can observe an animal using a tool to extract money from a tourist.

So, for the moment, let's assume that the essence of what we are is something very specific, like genes or a soul (call it the "self"), and everything else is a tool for advancing the "self's" agenda. In this thought-experiment, then, our body is just a tool for interacting with the world and our brain is just a tool for thinking about interacting with the world.

Our body, when thought of as a tool, can be described as having certain parameters. It is a certain size, uses a certain amount of energy, produces a certain amount of force, etc. The brain can also be thought of as using a certain amount of energy, providing a certain number of calculations at a certain speed, etc. So, if our "self" became aware of the possibility of gaining access to a broader range of capabilities than our brain and body naturally provide, why wouldn't it?

This process would appear to be a gradual improvement in the options our "self" has; specifically a better body and a better brain to control it. However, the brain and body can only be improved so much. For our "self" to keep getting more options it has to start incorporating things found outside the body. These things, like the wheel, a sharp stick, and fire, are just extensions of the body. Deer happen to be born with sharp sticks on their heads; we had to invent ours. Same capability.

Some of our newer inventions, like writing, GPS, and the Internet, are extensions of our brains. Rather than expanding mechanical capabilities they expand processing capabilities. We could spend a long time trying to puzzle through the problem of navigating to our destination, or we could build a circuit to do that thinking for us, just like a GPS unit does. Pulley systems allow our body to do more work than before and personal computers allow our brain to do more thinking than before.

In this sense we started "merging" with machines a long time ago, when we started using spears. The process accelerated when we invented books, and is beginning to progress wildly faster than before due to little things like the Green Revolution and the Internet.

I don't know what we'll be able to do in the future, I just know that it will be more than we can do now.

EDIT (2009AUG1) Cognition Distributed: How cognitive technology extends our minds mentions in the introduction that: "Cognitive technology does, however, extend the scope and power of cognition, exactly as sensory and motor technology extends the scope and power of the bodily senses and movement... Both sensorimotor technology and cognitive technology extend our bodies' and brains' performance capabilities... as we increase our use and reliance on cognitive technologies, they affect and modify how we cognize, how we do things and what we do. Just as motor technology extended our physical ability and modified our physical life, cognitive technology extends our cognitive ability and modifies our mental life."

18 May 2009

The Improbable Makes Perfect Sense

The thing about systems that makes them difficult to understand is that they obey rules which make sense, but the outcome can often make no sense whatsoever. By way of an example, I present this article.

Basically, a woman handed a man her baby to use as a Taser shield. Yeah.

Babies: It's about time they were useful for something.

I was going to try to build some sort of lesson around this example, but I think it pretty much stands on its own. If you think something would never happen because it wouldn't make any sense, then it will happen. . .and when it does it won't make any sense.

08 May 2009

Systems Discussions Require Specificity

Rachel Lehmann-Haupt wrote in Newsweek 11-18 May 2009: "Egg freezing, I believe, could be as revolutionary as the birth-control pill. And the timing for its takeoff couldn't be better. The age of first-time motherhood is rising. In the United States, the number of women becoming pregnant between the ages of 35 and 44 has nearly doubled since 1980. As education, advanced degrees and higher salaries become priorities, we are trading in our years of procreative power to gain economic power."

I tracked down the statistic the author references (thank you eppc.org for actually referencing the stat) in a press release from the National Center for Health Statistics. The press release states, "The birth rate for women aged 40-44 years has more than doubled since 1981." It also states that the birth rate for that age group in 2003 was 8.7 births per 1000 women. I had to dig through their archives to find the exact birth rate for 1981 (it was 4.0 for all women 40 and over).

The data does indicate a steady increase in the birth rate for women over 40. However, when the birth rates for all the other age categories are included in the plot, the trend is. . .underwhelming.

As you can see, the statistic is entirely accurate, but it is also completely out of context. Until the birth rate of the 40+ age category doubles from 8 to 16, and then again to 32, it will still be a relatively insignificant event. Mentioning that it doubled is like mentioning that America is using more wind power. Yeah, we are, but even "twice as much" still doesn't matter. In fact, the most significant increase seems to be in the 30-40 age range, not in the 40+ age range.

After looking at the data (in what is admittedly a quick and dirty fashion), the author's main point seems to stand. Overall the birth rate seems to be decreasing and, at the same time, the birth rates for older women have been increasing. From a systems perspective, this is an interesting trend. What factors could be driving this change? Is it "education, advanced degrees and higher salaries" as the author states?

One of the things to keep in mind when looking at a system is the difference between correlation and causation. Just because two metrics change the same way at the same time does not mean one is affecting the other. For example, an increase in average global temperature coincided with a decrease in average global pirate attacks. That does not mean that pirates are allergic to heat, and it does not even mean they are both responding to a change in some third factor, it just means that they both changed.

I didn't write this to get into the debate. I only intended it as a small case study to illustrate a point. Statistics should be very carefully applied to systems. Statistical analysis is a great tool for reductionism, but it is less useful for analysis of holistic systems. This is due to the fact that statistics, by necessity, can only be used when things are reduced to specific metrics. It is tempting to analyze every subsystem and think that you understand the system, but to understand the system you have to analyze the system itself, not the subsystems. Doing this properly requires carefully defining exactly what the system you are studying consists of, which is pretty boring, so that step usually gets skipped (or at least not mentioned).

When you want to talk about a system, but you skip the definition step, what you say probably doesn't matter. At the very least, it is open to a lot of misinterpretation. For example, in the case I cite here the author could have been more specific about what the stat(s) were actually describing, how they were obtained, etc.

Opportunity Recognition

I think that the key to recognizing opportunities is to do a lot of thinking beforehand.

When we observe a system we have a chance to learn what the underlying principles of that system are. The principles, of course, are much more important to understand than any particular manifestation of them. An actual event, or series of events, is just a manifestation of the underlying principles. If we imagine that the event is important, rather than the principles which dictated it, we will never be better than one step behind. The principles allow us to predict what will happen based on what is happening and what has happened; to project into the future.

We don't know where it will go, but we do know where it won't go.

Once we can predict what will happen with better than 50% accuracy it becomes a useful tool. What we do with that tool is stick it into our imagination, along with everything we know about the past and present, and churn the whole thing up for a while.

I can't take credit for this particular idea. But Blenderhead can.

If done properly, what will pop out are plausible predictions of future events. We can then test the plausibility of these predictions by waiting to see if they actually happen. When they happen as we predicted, we win a Nobel Prize. When things do not go the way we predicted, we reevaluate the ingredients in the mind-blender and then set it to frappe. Eventually we will get an accurate enough understanding of the principles, and the salient aspects of the past and present, that our predictions will be relatively reliable.

AFTER that, we are ready for opportunity recognition. The potential to recognize opportunities only emerges when we start to think about the way things could work out if a change was made at the correct time or place. It is the next step beyond simply predicting the way things will work out if everything continues as it has. First we have to predict what would be useful in a situation that doesn't exist yet, and then we have to file that prediction away.

Opportunity recognition finally occurs when we run across a situation that closely conforms to those predictions we made. If we thought previously that a certain thing would be possible if this, that and the-other-thing were present, and then we see something that offers this, that and the-other-thing, we can recognize it as a chance to make something better happen. However, it requires a lot of forethought. That is the key. Effort has to be put into figuring out what would be useful, but that first requires that we figure out what will happen, and to do that we first have to figure out the principles that control the situation.

03 May 2009

What Leadership and Management Really Are

Everyone has their own way of defining leadership. Here's mine.

I think that leadership, as a concept, should be compared to management. Management is performed whenever someone deals with complexity and leadership is performed whenever someone deals with change. They can be performed at the same time, and usually are in some proportion, but this distinction makes them easier to think about. They are not concepts which can be separated; they are two sides of the same coin.

Okay, so technically coins have three sides. I'm not changing the metaphor.

If we think of actions as if they were all mixed up together and baked into a brownie, then we will have a very strange mental image. However, we will also have a useful image. The center, the majority, of the brownie has the same soft consistency. This majority would represent how most of the things we do are only in response to complexity. The edges of the brownie, which are harder and chewier, are the minority of actions which are in response to change. It's not such a metaphorical leap to imagine the inside of a brownie as all being the same, and the edge of a brownie as the part that has to "deal" with the change from brownie to non-brownie, which is why it's a little bit different from the rest of the brownie material.

Pictured: The weirdest metaphor you've seen today.

The proportion of soft brownie to chewy brownie varies from situation to situation. Some jobs are characterized as "Leadership" jobs because one tends to either seek out or be confronted with more change than usual. This idea could be represented like so:

Pictured: The most delicious leadership lesson ever.

The reason this metaphor is useful is that it illustrates a principle of the relationship between management and leadership. They really cannot be found in isolation from each other. You can't bake a brownie that has no edges and you can't bake an edge that has no brownie. Additionally, the center and the edge of a brownie are made of the same stuff (actions in case the metaphor is still too vague). We tend to label a job as either management or leadership because of the relative proportion of edge to area. So, a marble would be an example of a "management" job because it maximizes management decisions and minimizes leadership decisions.

As little surface area as possible.

This could be compared to a radiator, which maximizes surface area relative to volume, just like a "leadership" job would maximize leadership decisions and minimize management decisions.

As much surface area as possible.

So, when people try to describe leadership as fundamentally different from anything else they are forgetting that leadership is just the edge of the brownie that runs interference between the brownie and the outside world. It is still the same stuff; it has only acquired a different consistency due to being exposed to change. Management and leadership always exist together, but the proportion sometimes changes.
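
For the geometrically inclined, the marble-versus-radiator contrast is easy to quantify. The shapes and numbers below are arbitrary, chosen only to show that for a fixed volume (the "actions"), a compact shape has far less surface (the "exposure to change") than a flattened one:

```python
import math

V = 100.0  # fixed volume, arbitrary units

# Marble: a sphere of volume V.
# V = (4/3) * pi * r^3 -> solve for r, then A = 4 * pi * r^2
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_ratio = (4 * math.pi * r**2) / V

# Radiator: a thin 10 x 50 x 0.2 plate, also volume 100.
plate_ratio = 2 * (10 * 50 + 10 * 0.2 + 50 * 0.2) / V

print(round(sphere_ratio, 2), round(plate_ratio, 2))
```

Same volume, roughly ten times the surface-to-volume ratio: the same "stuff" can be almost all management or heavily weighted toward leadership depending on its shape.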

Management, then, is when someone is dealing with something that is well understood. We like to keep things the same for as long as possible because we need to be able to rely on something. That means that after a while the thing that has always been the same is so well understood that every problem has a documented solution. A person only needs access to this accumulated knowledge and they can maintain the status quo indefinitely. Leadership, however, is when someone is on the leading edge dealing with change. To maintain an area in which things do not change, someone has to be out on the edge figuring out how to deal with the inevitable new problems.

There is no documentation which allows someone to deal with a new situation because nothing can be documented until it has first been dealt with. So, leaders have to basically guess the future, which is why leadership is so hard to understand. People keep trying to approach leadership the same way they approach management, as a complexity problem instead of a change problem. Another way to think about it is that there is a fundamental difference between dealing with something you can see and something you can't.

Complicated, but at least you know what you're dealing with.

That could be anything. We should form a committee to discuss it.

The latter picture, of the shadow, is what leaders are expected to focus their attention on. They are supposed to learn how to interpret things with no context and no support. Is it any wonder people are so confused about what they're supposed to do? On the one hand leaders are supposed to inspire confidence. On the other hand their job is basically voodoo, and their success is largely dependent on luck, which does not inspire confidence.

This brings me to a key point, which is what leadership is not. Leadership is not management. A manager who delicately applies some creativity to deal with workplace drama is exercising leadership. A leader who carefully practices his speech and incites the masses to follow him is exercising management. The methods for inspiring people to do things are relatively well documented and are becoming more so every day. Therefore, inspiring people should be characterized by the preponderance of management inherent in the action. Leadership, on the other hand, is dealing with a situation without any supporting information. It is inherently impossible to define HOW to 'do' leadership, then, because every situation is different.

29 April 2009

Music emerges from complexity

This is a good example of what emerges from sufficiently complex systems.

As a system becomes more complicated, new properties begin to emerge.
"The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe... The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. At each level of complexity entirely new properties appear. Psychology is not applied biology, nor is biology applied chemistry. We can now see that the whole becomes not merely more, but very different from the sum of its parts."

It's not that no one could have predicted that people would make music with old computer parts; people make music with everything they can get their hands on. It's that computer parts could not have existed without a certain level of system complexity; they simply require too much infrastructure to be available at a lower level of complexity.

There is no inherent difference between a person turning an old trash can into an instrument and a person turning an old computer into an instrument. However, one of them only requires that trash cans exist, while the other requires that computers exist.

Cavemen lost in a modern world, or international percussion sensation?

We will always be able to hit something, or pluck something, and get a noise out of it. With a certain amount of experimentation and elbow grease we could figure out how to get a musical scale out of just about anything. However, new tools require us to learn new techniques. Notice that no one was banging on the old computer parts. The concept of "hit it until it makes a sound you like" is not new, and might even be buried somewhere in our genetic code. Taking over an old computer system's drive unit and figuring out how much juice to give it, and for how long, and in what combination, to get a particular sound requires an entirely new thought process.

Not everyone likes every instrument. If you happen to want to play an instrument that does not exist you have two options: invent it or wait for someone else to invent it. To invent an instrument requires knowledge, which means that someone might learn something new just so that they can invent (or play) an instrument.

Once these new skills and ways of thinking are out there, they can be applied to areas where they might not have been generated spontaneously, and they can even inspire brand new ideas. This increases the complexity of the system even more, allowing for even more emergence.

27 April 2009

Robots are better in conception than reality

Few topics inspire as much excitement (among a particular crowd) as robots and, more specifically, the future of robotics. I have to admit a particular fascination with this subject, partly because robots are pretty cool, but mostly because the subject covers such a wide range of possibilities. Will robots become our overlords? Will robots consume the Earth and everything on it as raw materials? Will robots and humans live together in harmony forever? Will robots become grim killing machines with little subtlety and less emotion? Will they always be a disappointment?

Mike Treder writes, "But what if, instead, the recursively improving computer brains of robot warriors allow them to become enlightened and to see the horror of warfare for what it is -- to recognize the ridiculousness of building more and better (and more costly) machines only to command them to destroy each other? What if they gave a robot war and nobody came?"

In contrast, Jordan Pollack writes, "Most people's expectations of robots are driven by fantasy [...] We have to master either software engineering or self-organization before our most intelligent designers can dare play in the same league as Mother Nature [...] In case you missed them, today's most popular robots are ATMs and computer printers."

The robot apocalypse will be underwhelming.

They both bring up good points, but Pollack's is much more in line with reality. Robots inspire such heights of fancy that one cannot distinguish the history of the concept from the history of people jumping to absurd conclusions. For example, South Korea is officially writing up a code of ethics to protect robots from human abuse. So, they've jumped right past the question of whether robots can even be abused and are making sure they are protected just in case. This is an interesting thing for a country's government to throw its resources and prestige behind, considering there are plenty of people in their own country, a certain nearby country, and all over the world who are being abused by people right now. It indicates that 1) the level of absurdity of the things politicians do in South Korea is equal to the level of absurdity of the things politicians do in America or 2) they are more worried about hurting the hypothetical feelings of robots than the actual feelings of humans.

Cardiologist robot is procrastinating.

This sort of thing has been going on for a long time in one form or another. Right here in the States non-professionals are arguing over whether or not robots should be held responsible for their actions, while the military itself is dragging its feet over even defining the requirements. (If the requirements are not defined, then no progress can be made.)

People have always been worried that "the military" will jump feet first into autonomous killing machines, laughing maniacally the entire time and possibly dying ironically when their monstrous creation turns on them. The reality is that military professionals take their jobs very seriously and are incredibly hesitant to give a machine the authority to do anything at all. Anyone who has had to deal with the results of lowest-bid government contracting is reluctant to trust anything built by a government contractor. Giving it a gun and orders to "Git 'em!" is the last thing any professional will do.

Pictured: Unlikely.

Additionally, why would the military ever embrace autonomous soldiers? If they ever worked correctly they would simply take the soldiers' jobs. The only incentive the military has for incorporating robots is to expand into capabilities it did not have before (reconnaissance) and to protect its own soldiers' lives while increasing the danger to the enemy's soldiers. And what do we see? That the thousands of robots being used right now are gathering information and defusing bombs. That is the reality of the situation. As communication links improve you might see teleoperated fighting robots, but I do not think autonomous soldiers will ever be embraced by the military.

Innovation Never Decreases

The great thing about innovation is that it builds on itself. The problem is that people always seem to think it doesn't.

It is not so much that every time a new innovation appears, especially a disruptive one, people publicly claim that nothing will ever top it. It is that people seem to instinctively think a great new idea will end more things than it will begin. When nuclear weapons were invented, people claimed they would end warfare.

The reality is that new ideas only lead to more new ideas. Yes, innovations often make something else obsolete, but they never render it completely unnecessary. By way of example, the enemy cannot launch a nuke if you disable their hand.

Yes, a nuke is crazy effective at what it does, but it does not do everything. The invention of nuclear weapons did not render anything which had been invented before unnecessary (ED: like knives). You can see this same pattern in any other area of innovation. The invention of transistors did not render vacuum tubes unnecessary. The invention of graphic design did not render painting unnecessary.

All an innovation does is expand on what came before. It gives us new options. Because an innovation creates new options, without completely invalidating the previous options, it expands the collective number of options. This means that there are now more things to be innovated upon, which means more innovations, which means the number of options expands exponentially. It is equivalent to the expansion of the area of a circle (ED: quadratic) as the circumference increases.
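
The circle analogy is just first-year geometry, but it's worth checking: circumference grows linearly with the radius while area grows quadratically, so doubling the "edge" quadruples the space inside. A minimal sketch (the function name is mine, for illustration):

```python
import math

def area_from_circumference(c):
    # C = 2 * pi * r, so r = C / (2 * pi); then A = pi * r^2
    r = c / (2 * math.pi)
    return math.pi * r**2

# Doubling the circumference quadruples the enclosed area.
ratio = area_from_circumference(20.0) / area_from_circumference(10.0)
print(round(ratio, 6))  # 4.0
```

In other words, each new option added to the "circumference" of what exists opens up disproportionately more room for combinations inside.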

Innovation is a process that makes itself more likely. Any new innovation will only provide a new possibility; it will not negate the possibilities that came before.

An Uncommon View on MidEast Oil

People tend to assume that America is too dependent on the MidEast for oil. This is also assumed to be a bad thing because American policies often conflict with MidEast policies (imagine it's a homogeneous bloc for a moment) on pretty much every point. Things like "being there" and "not leaving," for example.

The general train of thought is that America should not be sending money to "our enemies" in the MidEast because they just use it to undermine our power, destroy our alliances, and even attack our homeland. This is an especially powerful argument because, well, we do send them money and they do use it as a resource to fight us. However, taking the step of declaring that to be such a bad thing that we should no longer buy oil from the MidEast is, in my opinion, unjustifiable.

The DIME (diplomacy, information, military, economic) is an old mnemonic which helps one think about situations like this. Any conflict involves, or at least could involve, these four factors. For example, America has very little understanding of MidEast language and culture (diplomacy). America also suffers from a significant lack of good intelligence regarding the goings-on of the area (information). What we do have is a strong military presence and relatively strong economic ties.

The thing is that economic ties go both ways. There would be no trade without two parties and two different items of value. The MidEast has too much oil and not enough money; America has too much money and not enough oil. So we trade, and we both get something we want. When people complain about the MidEast using the money it gets from us to fund projects contrary to our interests they always fail to mention that we use the oil we get from them to fuel projects contrary to their interests. That is the nature of competition. Two entities finding all of their interests aligned is. . .unusual.

If we stopped buying oil from the MidEast, leaving aside for a moment the question of where we would make up the difference, what would happen? I propose that the MidEast would sell it to all the other countries which are interested in using more oil. Additionally, with the United States leaving the market the price of oil would drop, which would mean the numerous less-wealthy countries would be able to buy up what was available. Yes, it's a simplification, but it is not the important part. With the US and the MidEast no longer economically involved with each other at all, there would be very little incentive for the two to get along. At the moment we depend on each other (to a certain extent), so there is an incentive to keep things from spiraling out of control. But that incentive is primarily economic.

This is similar to the situation America is in with regards to China. The two are so closely intertwined economically that they cannot afford to disagree too severely.

I propose that the solution to the situation is exactly the opposite of the common "wisdom." The US should become more closely intertwined with the MidEast so that we gain even more control over their actions. Isolation from the MidEast will only lead to a situation in which they really do not care what they do to us, because they are not dependent on us in any way. At the moment we buy their oil, thus propping up their authoritarian governments, which in turn keep the population somewhat in check. If their governments were free of our influence, and yet still in need of a scapegoat for the misguided anger of their blood-feuding populations, they could easily encourage acts contrary to American interests that they would have discouraged previously. Since this is exactly what we do not want, we should not cut the one significant tie we have with the MidEast, and in fact we should strengthen it.

The News Needs to be Free

The problem with the news is that it is a business.

Any corporate entity which exists to turn a profit will, by necessity, make the pursuit of profit its highest priority. It must, otherwise it will disappear. Its very existence depends on an ability to perpetually take in more money than it expends. By way of a reality check, this is analogous to every 'living' thing our science is aware of. If something needs more resources than it gets, it starves to death.

The vast majority of news organizations depend on advertising dollars for their existence. Advertisers choose where to send their money, and how much to send, based on how many people 'visit' that place and how likely they are to pay attention to the ads found there. This means that news organizations maximize their profit by attracting as much ad money as they can with as small an expenditure of resources as possible. By way of a reality check, this is a description of a business model.

Therefore, the incentives are all wrong. The organizations investigating and reporting the news do not care about the quality of their product. They care about the ratio between how much it cost them to attract eyeballs and how much they can charge advertisers for those eyeballs. The quality of their product is related to that ratio, but only indirectly, and there are many competing variables.

If we, as the public, want to receive the best news possible we will have to take steps to institutionalize the proper incentives. In my opinion, the proper incentive is one which puts the quality of the news at the absolute top of the goal hierarchy. We want news organizations to think first about producing high quality news. This would seem to require that they be freed from worrying about the existence of their organization, since any entity which worries about its own existence will do so first. The only exception is when an entity selflessly decides to sacrifice its existence for something more important, but that would be effectively useless, since it would dissolve the very organization we wanted to provide us with news.

One way to accomplish this is to establish a source of money, like an endowment, which supports the news organization. In this way it would be independent of outside interests like advertisers and would be able to focus on properly reporting the news. It is possible (likely?) that there would be less of an incentive to work hard since the money is guaranteed, but this could be mitigated by a board of directors who would carefully choose a CEO for the organization and hold them to high standards.

A news organization like this would be able to provide context for events. When a plane crashes, they could afford to sacrifice space to boring facts like how thousands of planes landed safely at the exact same moment, and how the average person is still safer just sitting in an airplane than anywhere else. They could report ALL the details of a "police brutality" story; like how that poor, mentally disabled man managed to fight off a half dozen officers, and almost got a gun out of its holster, before they decided to taser him. This hypothetical organization could put the events into context, and could report when someone is lying by digging up their own words or actions previously reported (just as one example). The average person could trust this source of information because it would be free from selfish influences.