My year without song


Almost one year ago, exactly, I stopped listening to music. The deal was, my mother was dying, a beautiful woman named Janet, illuminated with cancer after 80-plus years and friends and knitting groups and hope and silver bracelets and then she died. As likely you have experienced, if you are young, an Uncle Doofus dies, if old, a Dearly Beloved, the woman of your dreams, and here I was in the middle. I’d been through death before, a grandfather in far-off Montana passing away with the news relayed via phone causing a young boy who loved his stories and boxing moves and lamb chops on earlier sweet summer vacations to run through a Vermont field amid overgrown grass and dirt holes caused by groundhogs crying and screaming in shame-anger to the more-recent father passing, after a visit to the old house and long drive back to work amid clouds glowing orange in the sky with a faint scent of hope, purple lines on the New Hampshire horizon lying that hope exists, the phone saying what any son can’t comprehend at first, hope is dead, your Dad is gone. But a mother, a Mom, when that time comes, for you, that will be different.

What was different was I was there at the instant of Her death. Her wizened face drew ever more taut, the morphine drip a bliss from hell, as she grew thinner and thinner until with one last intake she did not exhale, and I caught the moment as nurses swirled around puttering and no exhale came except one tear, one shining drop of liquid saline, drifting from her left eye down her cheek, and I watched almost in admiration at this signal that no future breath was coming. And she was gone. As luck would have it, a few of her friends walked into the nursing-home room at that second to check in, and torn between universe-rending bereavement and neighborly politeness I looked up and said, “Hey, I think my Mom just died, can you give me a second?”

So I stopped listening to music. The career thing was tough, I’d had a few bumps at work, the thrust-and-parry of an agency and internal politics and egos vs. egos had had its day, and suddenly the woman who carried me into the world was missing.

So I stopped listening to music.

Until now.

In this past month, I calmed down. I’ve been reading a bit on Buddhism, not to share any literature shit with you or question your Christianity but simply to study the philosophy of “letting go,” the idea that stress is hot coals held tight burning in your hand so drop that shit down, and watching the brilliant Dan Lyons/Fake Steve Jobs recover from downturns and reading Marcus Aurelius think deep thoughts on kicking ass in Europe and suddenly realized I should try to inspire those around me, officemates and agency partners and clients and my sons, dear boys, to reach their potential. So I found the old iPod, a blue widget so small I almost lost it in the laundry, plugged it in, and recovered the playlists a slightly younger Ben once listened to and realized the … music was … good.

So last week, on a sunny day, I packed my old workout gear and at lunch stepped outside the brick machine to jump rope for 30 minutes in an early spring workout listening to songs that had not pumped me up for more than a year.

The rope swung. I was stuck inside the vortex. The clouds passed by and I went into that bliss zone of pumping peace. OK, after 10 or 15 minutes, I felt like vomiting, but I still swung the damn rope, to the beat, drifting into the state of sound that I once knew as a young man while fitter. Not sure how that one made the playlist. Wait, that’s better.

I had recovered the sound of song.

Sorry to have lost you.

Invasion of the marketing robots


I work in advertising, in a mysterious region called “media planning” where we have an even smaller island in the shallows labeled “programmatic.” If you’re not familiar with how advertising works, just imagine that our media team focuses on the mathematical predictions of what could work in communications campaigns designed to influence people to buy things … and we are starting to use automated software systems, called programmatic, to assist us in targeting ads.

So I’m really interested, as you may be, in whether robots are coming for our jobs.

Earlier this month on a stage in Fort Lauderdale, Florida, Jon Iwata walked over to a small white robot. Iwata is senior vice president of marketing and communications for IBM, and he’s spent the past few months conducting a road tour promoting IBM’s new global brand slogan, “The Era of Cognitive Business.” The cute plasticky robot was tied into Watson, IBM’s artificial intelligence experiment – powering everything from Under Armour workout wearables to weather predictions. Watson, famous for winning a Jeopardy match in 2011, is at core a computer that can answer questions by processing vast amounts of data. But Watson has two other levels for IBM: Outside, it’s a humanized brand face for a vast technological monolith, and inside it is really an ecosystem of machine learning. Iwata and the robot explained how Watson could help with almost everything humans do, from research to healthcare to, well, advertising. Watson, it turns out, is not a singular robot that is learning, but a vast series of knowledge pools each of which could be siphoned off to perform a specific mission.

“It will be inevitable that artificial intelligence, or digital intelligence, will be embedded and integrated into all things digital,” Iwata said. “Why? Data is exploding today, and most of the data is unstructured.” The volume of data in healthcare, government, utility and media doubles annually. The Weather Channel, for instance, gathers more than 3 billion data points from weather stations to build forecasts. Daily, our human species now produces 2.5 billion gigabytes of data, enough each week to build a stack of thumb drives holding 1 gigabyte each from the Earth’s surface to the moon.
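That stack-to-the-moon image is easy to sanity-check with back-of-envelope arithmetic. A quick sketch, assuming each 1-gigabyte thumb drive stacks about 2 centimeters tall (my assumption — the talk didn’t specify a drive size):

```python
# Back-of-envelope check of the thumb-drive claim above.
# Assumption (mine, not IBM's): each 1 GB drive stacks ~2 cm tall.
DAILY_DATA_GB = 2.5e9          # 2.5 billion gigabytes produced per day
DRIVE_HEIGHT_M = 0.02          # assumed height of one drive, meters
EARTH_TO_MOON_M = 384_400_000  # average Earth-moon distance, meters

weekly_drives = DAILY_DATA_GB * 7         # one 1 GB drive per gigabyte
stack_m = weekly_drives * DRIVE_HEIGHT_M  # height of a week's stack

print(f"{stack_m / 1000:,.0f} km, or "
      f"{stack_m / EARTH_TO_MOON_M:.1f}x the distance to the moon")
```

At roughly 350,000 kilometers a week, the stack lands within about 10 percent of the moon — close enough for a keynote.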

Machines are learning to manage this complexity, finding patterns that lead to insights that in turn push controls. Fly in a modern airplane, and the pilot assists in the takeoff and landing, with most other actions completely automated by algorithms.

So it’s only natural that artificial intelligence would encroach on the ad industry. The art of influencing consumers or business partners to take action is moving rapidly into science. At the end of Iwata’s talk, someone posed a question via tweet, as people do at conferences nowadays: had IBM ever deployed Watson to run a digital ad campaign? “Sure,” Iwata said cheerfully. “Our team tested Watson running programmatic digital, and the results went up 2x over anything we’ve seen before.”

 

Marketers can predict what you’ll do next (say, catch the flu)

Advertising has always been based on data — marketers at core want to place the message about their supposed value against a human brain that can only be found by some form of data targeting — but the vague concentric circles of targeting have tightened from demographics to individuals to psychological prediction. Old qualitative systems (focus groups, radio ratings panels) and quantitative systems (Portable People Meters that accurately monitor radio tune-in signals among volunteers) are migrating to huge sensor-based systems that pick up individual motion, for extrapolation to what you’ll do next.

Sensors like, say, the ones in your phone.

For instance, algorithms can now suggest where you should go, or not go, to avoid catching the flu, based on mapping the smartphones around you. Google for years has mined search data for flu-outbreak patterns faster than reporting from the U.S. Centers for Disease Control, but Alex Pentland of the MIT Media Lab has gone even further in finding clusters of people who are about to come down with fevers. Pentland uses so-called “reality mining” to evaluate signals from the smartphones people now carry in their pockets. He’s built algorithms that can tell not only who has the flu, by spotting phones that break their daily commute to and from work, but which people are just getting ill, by sensing the common variances in travel and communication we all make when we start to feel sick. A few days before you get really sick, you make uncharacteristic changes in behavior; people make more calls in the evening to friends or family, seeking an unconscious consolation, as they fall under the weather. By picking up and modeling locations of phones with these early flu signals, Pentland can build maps showing which movie theaters on a weekend night should best be avoided — because more people there are about to get ill.

Data predictions are moving far beyond traffic alerts to forecasting nuances of human desire, health, and behavior. A few years ago, Target sent coupons promoting maternity wear to a Minneapolis household when it picked up signals, from shopping behavior, that a woman who lived there was pregnant. The woman was a high school girl, and her father didn’t know she was pregnant yet.

 

Robots writing ad creative

While many in the ad industry have boxed this robotic targeting-and-prediction trend into “programmatic,” thinking it just applies to digital banner ads or online video, the reality-mining bleeds into creative, too. Algorithms aren’t just for digital breakfast any more.

Consider the company Automated Insights, which turns datasets into nearly perfect prose. The AP uses it to write more than 20,000 news and sports stories every year, and companies from Allstate to Samsung deploy it for automated business writing. Here’s a real example:

“Alyssa,

You started this month with $1,800,000 in total pipeline. You have $900,000 in closed/won revenue against your 2015 annual quota of $1,000,000, and this is 150% of what you closed by this same time last year. Damn well done!…”

Play this auto-content out, and the noblest of advertising human innovation, creative for television ads, could soon be automated. In 1978, Donald Gunn, creative director for Leo Burnett, took a year-long sabbatical to study patterns in television advertising. He concluded that all TV ad creative falls into 12 master formats. There is the product demo ad (HeadOn), the contrast-with-competition ad (Audi vs. BMW), associated user imagery (Justin Bieber relays his cool persona to Calvin Klein), and only nine others. Humans in advertising hold The Big Idea sacred, but computers that can automatically write AP stories surely are not far away from algebraically thinking up a funny Super Bowl spot based on core formulas.

(Male) Actor 1 with (product) (stumbles). (Attractive female) Actor 2 (responds) (sexual tension). (Barrier) arises, then (product) solves (barrier) with (unexpected outcome*). (* must match template for human humor.)

Amy Webb, head of the future-forecasting firm Webbmedia Group, has suggested marketing is one of eight jobs that could be replaced by robotic systems in the next 20 years (along with cashiers, finance managers, journalists, and hell yeah, lawyers). In advertising, algorithms could pull in data on consumer habits, desires, and media trends; parse ad creative for what will work best; auto-generate content; select the media; measure results; and optimize to best performance. Six levers. Done.

 

When humans win

However, the history of AI shows the race to replicate human strategy is not a quick one. In digital advertising, many systems in recent years have promised to use automated algorithms to target ads against the right people. AppNexus and The Trade Desk are two examples of systems that learn over time; punch in target data segments, budget, and the campaign goal, and as the campaign runs the bidding system measures what is driving clicks or conversions to a web form, and dials in the variables.
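The learn-as-it-runs behavior these platforms describe can be sketched as a simple “multi-armed bandit.” This toy epsilon-greedy loop is illustrative only — it is not AppNexus’s or The Trade Desk’s actual algorithm, and the segment names and response rates below are invented:

```python
import random

random.seed(42)  # reproducible toy run

# Hypothetical audience segments and their true (unknown to the bidder)
# conversion rates.
SEGMENTS = {"sports_fans": 0.010, "new_parents": 0.030, "retirees": 0.015}
EPSILON = 0.1  # fraction of impressions spent exploring at random

stats = {s: {"shown": 0, "converted": 0} for s in SEGMENTS}

def observed_rate(segment):
    shown = stats[segment]["shown"]
    return stats[segment]["converted"] / shown if shown else 0.0

for _ in range(10_000):  # one ad impression per loop
    if random.random() < EPSILON:
        segment = random.choice(list(SEGMENTS))     # explore a random segment
    else:
        segment = max(SEGMENTS, key=observed_rate)  # exploit the best so far
    stats[segment]["shown"] += 1
    if random.random() < SEGMENTS[segment]:         # simulate a conversion
        stats[segment]["converted"] += 1

for s in SEGMENTS:
    print(s, stats[s]["shown"], f"{observed_rate(s):.2%}")
```

Over thousands of impressions the loop shifts spending toward whichever segment converts best — the “dialing in” these platforms promise, minus all the real-world complexity.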

This approach is spreading in other forms of advertising, including online video and television, as the fragmentation of media channels continues and the variables grow more complex. Marketers no longer live in a world where a handful of TV networks can reach most consumers. Instead of targeting “media” to reach a group of people, marketers must target millions of individuals, each a facet of the audience itself. The audience is a kaleidoscope of demographics, psyches, needs, and behaviors. Target groups cluster and break apart to reform, like starlings in murmuration. Automation and software are required to manage the complexity.

Artificial intelligence in marketing today works best when consumers are in a peak state of interest, matched to a vast set of product solutions. The Google search window, Amazon.com product recommendations, and Netflix movies are all examples of a hungry consumer nudged against a near-infinite supply of options. Personalization works best with plethora.

But absent urgent need matched to huge product inventory, automated ad systems often fail. When most consumers are only moderately interested in a specific product, the automated “funnel” of logic honing an ad campaign can’t match human guidance, because, somewhat counter-intuitively, humans move faster in both ideation and optimization. Our impulsiveness and aggression, ingrained in us by ancestors who had to fight or flee roaring tigers, allow for rapid moves that algorithms, building data over time, are reluctant to make.

People are more likely to realize when a prior assumption is off course. Tell a computer to take a nice walk in the woods, and it will walk. Tell a person the same, and when he hears a twig snap he’ll adjust course, anticipating a bear.

A smart human, for instance, would see a pattern of digital ad click-through rates (the percent of people served a banner ad who then click on it) averaging in the 0.08% range, notice that a 0.30% outlier looks suspicious, then dig in further to explore fraud. A technology algorithm designed only to optimize to a target of a high response rate would instead push more marketing funds into that 0.30% high performer, rewarding the fraud. Recognizing the power of humans to guide AI, some automated digital companies such as RocketFuel have moved away from their original “black box” algorithm system to a more-open interface, where humans can evaluate and revise the media targeting strategies. Others, such as LinkedIn, have failed in bids to use data on their users to guide automated targeting across the Internet.
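That human sanity check can be written down. A minimal sketch — the placement names and rates are hypothetical, and a median-based rule is used deliberately, because a fraudulent outlier inflates the very mean and standard deviation it would otherwise be judged against:

```python
import statistics

# Hypothetical click-through rates by ad placement, echoing the example above.
ctrs = {
    "news_site_a": 0.0007,
    "blog_network_b": 0.0009,
    "video_site_c": 0.0008,
    "mystery_site_d": 0.0030,  # the suspicious 0.30% outlier
}

# Flag anything more than 3x the median CTR for human review
# before the optimizer rewards it with more budget.
median_ctr = statistics.median(ctrs.values())
flagged = [site for site, ctr in ctrs.items() if ctr > 3 * median_ctr]

print(flagged)  # ['mystery_site_d']
```

The optimizer would have poured money into mystery_site_d; the review rule routes it to a person first.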

 

When robots fail

Oxford University professor Nick Bostrom predicts in “Superintelligence: Paths, Dangers, Strategies” that artificially intelligent systems will within 50 years outpace human reasoning. But he also worries that algorithms can go astray, creating huge risks, even before they grow smarter than us. The “Flash Crash” stock market collapse of 2010 was one example, in which automated algorithms at a mutual fund complex and high-frequency trading firms began feeding off each other’s erroneous signals, selling off S&P 500 futures contracts in a cascade that wiped out a trillion dollars of value before circuit-breakers kicked in to stop trading.

“These events illustrate several useful lessons…” Bostrom writes. “Smart professionals might give an instruction to a program based on a sensible-seeming and normally sound assumption … and that this can produce catastrophic results when the program continues to act on the instruction with iron-clad logical consistency even in the unanticipated situation where the assumption turns out to be invalid.

“The algorithm just does what it does, and unless it is a very special kind of algorithm, it does not care that we clasp our heads and gasp in dumbstruck horror at the absurd inappropriateness of its actions.”

Imagine an AI designed to run a paperclip factory, he suggests, with instructions to maximize the production of paperclips, that somehow escapes its computer box to control the world, mine the entire planet, and turn our environment into a massive heap of metal parts. Extreme and silly, perhaps, but robots only do what we tell them to do.

For this reason, humans are still needed in advertising, from The Big Idea to media planning to digital system management to analytics. Checking my reasoning, I asked a smart colleague at Mediassociates, Nate Carter, if he thought robots would control our future. “No,” Nate said, “because it’s all about aggression. If I’m a person running a campaign I can go in, turn off an advertising source, and make drastic changes over a short period of time, understanding the ramifications of each. If an AI shut things off and turned them back on, you would see bad results, you won’t know why, and you might have to shut down the process.”

Computers can win at chess today, but chessboards have only 64 squares. In a world of millions of marketing variables, humans are still needed to search for patterns or make guesses when we can’t find one. We’re winning for now, perhaps because instead of focusing on one smart goal, we’re not afraid to try many paths that might be stupid.

Yet … Bostrom is right. Solving chess once seemed an impossible challenge for computers, for it would mean matching the perceived height of human intellectual conflict. Today your Mac has a chess program that can easily beat you. Perhaps solving all of advertising someday won’t be too difficult for Watson at all.

 

Modeling the human projection of bullshit



An online friend of mine, Josh Bernoff, recently left Forrester Research, where he led media analysis and coined the brilliant term “Splinternet” to describe our fragmenting communication networks. Looking for something new, Bernoff started a blog critiquing the rising tide of bullshit in human communications — everything from Dilbertesque business jargon to the lies of politicians. His style is a mixture of E.B. White grammar guide, Mark Twain misanthropy, and Warren Buffett business clarity. WTF?, a typical intro goes, followed by instruction on how the guilty party could have avoided BS.

At one point, Bernoff asked if I’d do a panel with him, and I began noodling on a model explaining the Creationist Forces of Bullshit. For instance, Josh is not really my friend — I’m falling into the same trap of bullshit in my lede above, exaggerating a minor connection for more persuasive content. So why does everyone, including me, craft levels of BS? My suggested framework for bullshit has three vectors:

  1. Level of confusion — is the author stupid or clear-headed?
  2. Level of misdirection — is the author deliberately trying to mislead?
  3. Level of bias — is the author starting from a point of prejudice?

These three layers explain everything you need to know about bullshit, and why today we have more BS than at any prior point in human history. Let’s explore.

Confusion: First, the level of confusion in communications is up, simply because today we have vast inventory of media that sucks in mediocrity. Journalism used to be a carefully studied specialty, in which editors hired writers after arduous training in small local markets until, after years of work study, only a few talented winners arrived at The New York Times. But today, anyone with a WordPress account can write, YouTube has unlimited inventory for video, and politicians must participate in dozens of speeches each week. In business, you can’t get through a day without someone throwing a half-designed PowerPoint at you. Every product comes wrapped in a nimbus of information; even ballpoint pens now have reviews.

Quality control always goes down when the production of supply goes up. The expert communicators on the far right of the skill bell curve have now been joined by the sloppy masses in the middle, as content oozes everywhere.

Misdirection: Second, yes, most authors of content try to mislead — because exaggeration is the basis of human survival. In his book “Spent: Sex, Evolution, and Consumer Behavior,” evolutionary psychologist Geoffrey Miller suggests human sexual selection, or how we pick mates, is based on our hyperbolic projection of desirable traits such as health, fitness, humor and intelligence to the opposite sex. So today we buy stuff such as leather jackets and fancy watches that don’t really keep us much warmer or tell the time better because subconsciously we want to project power that attracts mates.

Our ancestors who misled most successfully had the most sex, thus passing their genes and instincts down to us. So of course we exaggerate our skills. This is why businessmen wear ties (a bright phallic symbol) and women don lipstick (simulating the red flow of arousal beneath the skin), and why Donald Trump says his accomplishments are Terrific in every other utterance.

Communication has always been about influencing others to take an action. As we all compete with more content alternatives, we must be more provocative in statements to stand out, to misdirect more strongly. There’s a logical reason why Trump says incendiary things about immigrants, or why Carly Fiorina tells grisly tales about abortion. It’s not just the story; it’s the amplitude required to get a major reaction.

Bias: Third, all humans are biased because we have limited cognitive capacity to take in the complexity of our world, so we filter information with rigid frameworks. Here’s a test: Have your politics changed much since age 25? I didn’t think so. But if you were truly unbiased and now in your 30s or 40s, surely new knowledge gained over 10 or 20 years would have changed your world view? Um, nope. Once our prefrontal cortex is fully formed a few years after puberty, we tend to lock in.

Unfortunately, as our environment has grown more complex and evolves more rapidly, our bias lock-in leads us further astray. We have far too much information to understand now; Google’s Eric Schmidt has noted more data is now created in two days (about 5 exabytes, or 5 billion gigabytes) than was created from the dawn of civilization to 2003. And amid this flow of information, we are less likely to agree with others; the entire population of the U.S. Colonies in 1776 when we rebelled against England was 2.5 million, slightly larger than the city of Houston today. So today we live in a world of 300 religions, where Muslims are an enormous diverse group of humanity counting 1.6 billion, and yet some politicians in America think only one religion matters and all Muslims are suspect terrorists. Today’s U.S. society is 128 times bigger than the little revolutionary group that wrote the Declaration of Independence, and yet we assume we can pick one president who is “right” for all of us. Given the finite circle of any individual’s perception and the growing, near-infinite information cascade around us, our odds of being wrong have simply gone up.

So that’s the explanation for bullshit.

Humans are more confused than ever before, and more likely to express it. As we present our ideas, we can’t help but try to misdirect our communications to make each of us look better than we are. And within our storytelling, we’re all biased, struggling to understand a complex world with cognitive filters our distant ancestors used to manage clans of 150 people. Global warming isn’t real. Vaccines cause autism. GMOs are unhealthy. Big Business is holding you down. My product is what you need. Pick any belief, and share it; you’ll likely be wrong.

We misunderstand. We lie. And we project our silly bias. All because our desire for sex, buried under layers of Brooks Brothers suits and hipster beards and corporate meetings and political speeches, is required for the next generation’s survival.

Bernoff should have plenty of material for his upcoming book.

Posted by Ben Kunz.

The horizon of human souls


The woman lies on a special mattress, slab of foam and spectral vibration pulsing in electrical hum to help the un-ending sore on her back heal, resting, rasping, a clear tube of liquid morphine eased into a vein. She is thin now and her eyelids drift as her limbs shudder quietly like the leg twitch of your loved one in bed as REM dreams begin once more then her eyes flash open, a startling blue. “That wasn’t a death rattle, you know,” she says, and we chortle. A moment of humor on the edge of the abyss. Then pain. Spotted hands find the knitted cord attached to the switch in the wall, and pull gently, beseeching nurse, click switch, click, more morphine, please.

This woman is my mother and after eight decades she may have weeks or months or perhaps hours left, the cancer inside an insidious rise and fall, a war sometimes pulling her down under waves or pushing unbidden into hope. I look at her and see past husband, my father, the generations before them, and on a whim we call an aunt far removed in Montana to share their mutual voices one more time, and I wonder, is this all there is?

A few years ago, as the Earth’s population approached 7 billion, demographer Carl Haub wondered how many humans had ever lived on the planet. At the time, a popular conceit was 75% of all people who ever existed were then alive, but Haub made other calculations … and found that if homo sapiens began in 50,000 B.C., adding in growth rates and occasional disease outbreaks, more than 108 billion humans have lived at one point or another. (Let’s put aside that our species had billions of Darwinistic forebears, and go with the apelike uprights who walked most closely like us.) This means your personal, individual life accounts for only 0.0000000009% of all human contributions to … society or art or commercial trade or knowledge or religion or AI machines or whatever it is our species is collectively attempting to build. This sounds depressing, as if you could never make a difference, but then consider 6.5% of all humans who ever lived are alive now, and with modern media (TV or Twitter) your odds of reaching the masses with thoughts of influence have never been higher. Anyone who has wanted to change human history now has more potential, if only she could find the right communications lever, to move things in a new direction.

But most of us don’t end up this way, starting new empires or religions. Instead, we (most likely) fall in love, have a family, send our offspring into the world like arrows, work to earn food and shelter, and hope for the best. The vast majority of humans have been born into poverty and risk, subject to attacks by animals or disease, and a small fortunate few — about 4% of all the humans in history, including you if you are reading this — live today in comfort, with access to clean water and antibiotics and dentistry, able to baste in high blood-sugar surrounded by technological screens and roll at 10 times our human-running speed in shiny steel exoskeletons and even fly through the air like Greek gods, winners of the lotto tickets of history but oblivious, stressed insanely about late planes and car payments and college savings and numbing the pain with an endless mix of faux-theater performances streamed digitally by cable TV or Netflix. Alone amid the plenty of our fellow human billions, we snap pictures of ourselves and share false dreams on Facebook.

A few think more deeply, consuming tales of religion and God and afterlives passed down as either noble truths or naive fictions, consoling about the inevitable end, and a fewer still ponder the even deeper question that perhaps our ant-like lives are building a new collective organism of thought, a future that Kevin Kelly calls the technium and that Bill Joy suggests doesn’t need us, a robotic artificial binary awareness that may move into the clouds of space at some point in the future, a floating intelligence that will look back on us fondly as the ape-like ancestors who while fighting madly between ourselves like the animals we are gave birth to a knowledge that, once encoded, could grow organically into a far horizon of compassion.

If the universe itself is data, and we are giving birth to that information, spinning it together for the first time like sugar whipped into cotton candy, perhaps we are the proto-organisms in a Möbius strip of knowledge birth, the creators of the future God who will come back in time to watch over us. His voice will be quiet, yes, because He can only watch what that creation is, for interfering with our pain would undo His own genesis.

If this is so, then every life has a purpose, to love and bond and procreate but most important to share learning, to pass the knowledge forward, to spin the cycle one more step ahead until someday, even if we destroy our own species, the next generation of learning beings will move on. We are building robots. Search engines remember for us. Mobile exoskeletons are beginning to drive themselves. As we look up from our glowing screens powered by the clouds of information now gathering unseen, that future generation may be here. Those beings won’t have bodies flooded by hostile hormones and emotions. Logical, they may accelerate knowledge sharing, and given the growth curve of computing intelligence, it is likely we will never understand them.

Or so I thought hopefully, as I bent down and gave that woman a kiss.


Thoughts on our small and young universe


When I was age 7 someone gave me a pop-up book about the Apollo astronauts, including a little paper space capsule that swung over a fold via a wire when I opened to page 10, and I fell in love with astronomy. Sadly, astrophysics didn’t stick, but today I often read books on the universe … which usually start off with its immense scale. What bothers me is the universe is actually very young, and small, if you think about it.

First, consider our sun — it’s only 4.6 billion years old, about age 40 in human terms, on its way to roughly 10 billion years of burning before death. We’re lucky as humans that our sun is in its stable middle period, for in about 1 billion years it will grow 10% hotter as helium accumulates in its core, causing hydrogen to burn faster, and we’ll then be fried off planet Earth unless we try something radical like pushing asteroids past us in near collision for gravitational boosts to shift the Earth’s orbit outward or, led by Elon Musk, abandon mothership for a terraformed Mars.

Still, our sun is only 4.6 billion years old — and our entire universe is only about 13.8 billion years old. Which means our sun is just one-third as old as the universe.

This simple fact, once discovered, bends the mind. We are circling a star that formed only two-thirds of the way into the entire age of the universe — about one full stellar lifetime in. This is akin to being 32 years old and learning your father was Adam, the first human being who ever lived. And because our carbon-based life-forms depend on the explosive degeneration of a preceding star (complex elements are only created when a dying star spits them forth from its own lifespan), we are the first generation of life that ever could have existed in the universe. The complex carbon molecules that hook onto other atoms were created when a preceding star blew up, and there was only one stellar generation before ours.
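The age arithmetic is simple enough to write out, using 13.8 billion years — the current best estimate for the universe’s age:

```python
SUN_AGE_GYR = 4.6        # age of the sun, billions of years
UNIVERSE_AGE_GYR = 13.8  # current best estimate for the universe

# What fraction of cosmic history has the sun been burning?
fraction_of_history = SUN_AGE_GYR / UNIVERSE_AGE_GYR
# How far into cosmic history did it form?
formed_at = (UNIVERSE_AGE_GYR - SUN_AGE_GYR) / UNIVERSE_AGE_GYR

print(f"The sun is {fraction_of_history:.0%} as old as the universe")  # 33%
print(f"It formed {formed_at:.0%} of the way into cosmic history")     # 67%
```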

We, as life, are so young.

And what about the universe’s scale?

Putting timelines aside, the universe itself isn’t that big, when viewed in perspective. First, let’s pick up a yardstick. The speed of light is 670,616,629 miles per hour — in human terms, just over 1 million times the top speed of a Boeing 747 jet. Because light has a speed limit, the size of the universe is usually measured in “light years,” the distance a photon of light can fly through space in 365.24 days. It takes 8 minutes and 19 seconds for light beamed from our sun to reach Earth. Alpha Centauri, the nearest star system to ours, is 4.3 light years away. This is hard to fathom, but if one year has 525,949 minutes, Alpha Centauri is about 272,000 times farther away than the distance of the Earth to the sun. Measured against the scale of our local solar system — call it the sun-to-Pluto distance, about 40 of those Earth-sun units — Alpha Centauri is only 6,800 of those distances away.

The nearest star is closer than you think — in simple terms, if Pluto-to-sun were an inch, the closest star is only 1/10 mile away.

If we pan out, the greater universe is small, too, if we imagine ourselves a giant striding amongst the stars. Our galaxy is no more than 120,000 light years across, or 28,000 times the distance from our system to Alpha Centauri. If the road from Earth to the closest star were 1 inch, the width of our galaxy would be less than half a mile.

And how far apart are galaxies? Only about 20 galaxy-widths, on average. If our Milky Way measured a half mile across and you drove over it in a Star Trek car, it would be only another 10 miles to reach the nearest other galaxy.

Yes, the entire universe is huge. The entire “observable” universe, the part from which light can reach us, is about 93 billion light years wide, which sounds enormous until you consider that if you lined up 10 galaxies in a row, each 20 galaxy-widths apart from its neighbor, the observable universe is only about 3,900 of those 10-galaxy lineups wide. A factor of 3,900 times a local cluster of galaxies is a big number, but not one impossible to fathom.

If a local cluster of galaxies were an inch, the entire observable universe is 325 feet across — about the length of a football field.
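The whole ladder of scale analogies can be strung together in one quick Python sketch, using the multipliers from the paragraphs above:

```python
# Each scale-model analogy from the text, converted from inches.
INCHES_PER_MILE = 63_360

# Rung 1: sun-to-Pluto = 1 inch; nearest star about 6,800 of those widths
nearest_star_miles = 6_800 / INCHES_PER_MILE
print(round(nearest_star_miles, 2))  # about 0.11 mile, "1/10 mile"

# Rung 2: Earth-to-Alpha-Centauri = 1 inch; galaxy about 28,000 of those
galaxy_miles = 28_000 / INCHES_PER_MILE
print(round(galaxy_miles, 2))  # about 0.44 mile, "less than half a mile"

# Rung 3: a 10-galaxy lineup = 1 inch; observable universe about 3,900 wide
universe_feet = 3_900 / 12
print(universe_feet)  # 325.0 feet, about a football field
```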

And that’s all the universe we can see.

Of course, there are parts of our universe that are accelerating so far, so fast away from us, due to the flying shrapnel of the Big Bang and the apparent acceleration of this expansion by “dark energy,” that the universe may be wider than we’ll ever measure. On the distant outskirts of the real universe, stretching away from us rapidly as space itself expands, a beam of light sent toward us will never reach us, because the space stretching in between that galaxy and ours is “moving” faster than the light can overcome. (Einstein allowed for this in his theories, oh yes he did.) Let’s hazard a guess and assume the entire universe is 10 times as big as what we can see from the light reaching our telescopes. Then all of reality is 39,000 times as wide as a nearby lineup of 10 galaxies with some space between, in one of which, the Milky Way, we reside on a happy blue planet orbiting a mid-life star, itself the child of another star formed near the birth of the entire universe.

It’s all so young, and so small. When put into perspective, our universe isn’t even a teenager yet.

Posted by Ben Kunz

When AI arrives, we won’t recognize it

woman face reflection

 

Fourteen years ago in Wired magazine, Sun Microsystems co-founder Bill Joy wrote a long, brilliant thought-piece titled “Why the Future Doesn’t Need Us.” Three technologies, Joy suggested — robotics, nanotech and genetic engineering — threaten humanity because of the increasing odds that they will spiral out of our control.

Joy’s dark argument was that new scalable technologies can be launched by small groups, even individuals, to trash the planet, even if that trashing is a simple error. Imagine a high school science lab that, with a little online research, creates a new class of bacteria that replicates easily and outcompetes all other bacteria. A petri dish goes home in a pocket, oops, and the freed bacteria soon turn the biological world into gray goop. Nanotech could do the same with mini robots whizzing through your bloodstream. And artificial intelligence (AI), which once sounded like fiction, is now built into your iPhone. Can we control all this?

You can sense the growing unease across all of humanity with these potential technology mistakes. Films on artificial intelligence, zombie takeovers, rogue comets and uncontrollable earthquakes are everywhere. We now entertain ourselves by anticipating our species’ demise.

The world is changing at a much larger level

The mistake most of these scenarios make is they look at outcomes on our own human level, as in what would happen to me, you, and the couple living next door. The real answer is much bigger — that our little species may not matter, that evolution may rapidly move beyond us even if it feels like a local, temporal “disaster” to humans — but first, let’s examine the AI issue at our personal viewpoint.

Artificial intelligence (AI) debates often take an anthropomorphic, human-centric approach, with three questions: will machines ever (a) get smarter than humans, (b) become self-aware, and (c) then destroy us poor souls? World War II code-breaker Alan Turing suggested in his famous “Turing test” that (a) machines will get smarter than people as technology advances, until a machine can respond to any question in a way that mirrors human thought. (You already see glimmers of this with Apple’s Siri on iPhones.) Researcher John Searle challenged (b), the question of whether seemingly smart machines could ever become conscious and thus truly intelligent, in his 1980 paper describing a “Chinese room” thought experiment. Searle described how an English-speaking person locked in a room, given supremely detailed instructions on how to answer questions in Chinese with appropriate written responses, could respond intelligently to any question slid under the door without ever understanding what the conversation was about. So too, he suggested, machines will grow faster at emulating human thought but will never truly be cognizant.

These first two debates — will machines get smarter?, and if so, will machines become self-aware? — don’t really matter, because the answer to the third question may be coming the wrong way anyway. Eventually, a smart/simulated-smart program may cause humans harm.

Here’s a simple test: If you built a computer program charged with protecting life on planet Earth, might that program wipe out an invasive species that threatened all other species? Well, if that’s the case, humans spreading across the planet might be seen as a threat worth eradicating to save the coral reefs and fledgling birds now being destroyed everywhere.

But what if the clouds are intelligent above us?

Now, on to the larger level of AI: What if artificial intelligence has already arisen in a form beyond anthropomorphic mirrors? The best way to visualize this is to imagine you are in space, looking down God-like at planet Earth for the past 10,000 years, and could count every technology gadget from wooden wheel to computer chip as it spreads across the globe. Shivering a bit from the cold, you see: 5,000 years ago, a few thousand wheels and chariots. 200 years ago, suddenly, vast increases in metal weapons and steam-engine-thingies. 100 years ago, electromagnetic TV and radio pulses begin emanating from everywhere on the planet. 60 years ago, metal rocket probes shoot out from the planet to go look at other nearby orbs. 20 years ago, electrical pulses begin connecting millions of fixed silicon/aluminum/plastic machines. Five years ago, tiny mobile devices begin moving around on the bodies of billions of people, all pulsing with networked data. Today, humans begin implanting technology sensors into shelters (walls), rolling exoskeletons (cars), and even mammal bodies, to improve or track vision, heart rates, communication, and the whereabouts of cats and dogs.

Technology, if viewed as a life species, is taking over our planet. It is advancing beyond human control.

Gadgets, networks, and data transmissions are expanding in waves that appear unstoppable, creating what Kevin Kelly calls a “technium” that is pulsing over the entire globe. The entire universe of technology, this technium, is beginning to act as an autonomous, growing, unstoppable organism. Just as biological evolution takes different paths and cannot be repeated, yet still converges upward along predictable lines (toward carbon-based lifeforms with mirrored body symmetry and digestive systems pointing down toward gravity), technological inventions may also be inevitable. Someone, somewhere was destined to invent coded thought that became computer code that developed into cell phones that built communication networks that store vast reams of data. Silicon-based intelligence may just be the inevitable next rung up the ladder of evolutionary fate; and this intelligence is manifest not in a single computer speaking to you like Hal in “2001,” but rather in a vast new ecosystem of connected devices communicating with each other in ways individual humans may not perceive.

The irony of our quest for AI is that it may be here already in a form of crowdsourced intelligence, built partly from individual human minds and partly from the networked technology connections that accelerate our group action. You can see some of this today in prediction markets that show who will win future elections, in stock markets that react instantly to news of economic shifts, in Internet networks that break data into packets and send them seamlessly across the globe along the fastest possible routes. The pulse of technology to improve itself has become an innate force, yes, fueled by individual human ants who compete to build the next killer app, but leading to a world where faster data transfers and sharper analytic models are always the next outcome.

At some point our planet will shine with a new intelligence to guide markets, production, information, weather, temperature, and habitats for the biological creatures who built the original tech things. Like self-driving cars, eventually the globe will become a self-automated environment, with AI layers making billions of decisions each second for how to optimize the world to its desired goals. Eventually, these goals may escape our Earth to send information and intelligence outward to look at deeper issues, the connections between the black hole at the center of our galaxy and the dark matter around it, of the patterns of creation and entropy that may need to be reborn to stop the eventual heat death of our universe.

The ultimate evolution may be information escaping from the bounds of physical objects, of pulsing nodes that might live in the clouds of our own world or the far-off gas realms of our galaxy. As religions have predicted, the spirit is willing and the flesh is weak.

The questions are: will our little human minds be guiding that technium; will we recognize new intelligence for what it is; and will that emerging, smarter, technological being any longer need us?

Posted by Ben Kunz.

The power of one

Italy artist hanging Positano

Back in 1993, the year Tom Hanks starred in “Philadelphia” and more than 1 million people in favor of gay rights marched in Washington, D.C., the great Anna Quindlen wrote a column in The New York Times called “The Power of One.” Homosexuals at the time were fighting to end discrimination and news reports were disputing the number of people who actually marched; Quindlen took the issue up one level writing, “Now we have a numbers game. How many gay people are there in the nation? Ten percent? One percent? Four percent? It depends upon whom you ask…” Segueing to stories about heroic military personnel coming out, she concluded, “So the ice melts. The hate abates. The numbers, finally, all come down to one.”

Take the treat-gays-fairly issue up another level, and all of this is a lesson for today’s digital advertising, mobile and social media obsession. Marketers or publishers playing with these emerging channels are obsessed with volume: How many unique visitors does a website receive? What is the number of impressions? How many people were engaged? What volume of leads was generated? What retweets? Likes? For business, marketing expense divided by volume of new customers is a critical metric. For meme propagators, the dissemination of an idea in a network is all about total numbers.

But unfortunately, individual human beings are starting to play this same game. Spurred by social media UIs that show the tallied number of responses (usually in the upper-right field of vision), we worry about the quantity of retweets or blog comments or Facebook Likes tied to each of our hiccuping missives. The volume of people we influence is all that seems to count, despite the deeper logical knowledge that any human being following 1,000 people on Twitter and a few hundred on Facebook may have truly engaged with our individual post for about 0.2 seconds. We touch others’ souls as feathers in the wind, but instead of thinking about true impact we knit our brows about volume.

Quindlen had it right. “Now we have a numbers game.”

But what if you influenced one soul? What if a being out there, connected to you only by the virtual threads of social media, stepped away from suicide or got a new job or was inspired to change her worldview because of something you said? Perhaps your Klout score is in the tank and your Twitter follower count is down, but if you could truly guide one person for the better, would that trump the fictitious totals in the gamed system of social media tied to your profile?

Would you be satisfied if you influenced one person, with the power of one?

Network decay: Why you can’t avoid Jennifer Lawrence nude

Burning Man photo by Victor Grigas

Somewhere on the Internet tonight a user on 4chan is laughing, having posted nude photos of a series of actresses. If you read the reports, images of Jennifer Lawrence, Paris Hilton, Victoria Justice and others without clothes (and perhaps Photoshopped) had been hacked and shared without their permission. 4chan, if you are unfamiliar, is an extremely popular quasi-underground Internet forum originally used to post Japanese anime images … and now a quagmire of pop culture. 4chan gave us such happy memes as Rickrolling (throwing a user to a link of a bad Rick Astley music video) and lolcats. It is also host to juvenile instincts and nasty trolls, who in the worst cases have threatened violence, including school shootings. 4chan, at heart, is the Id of humanity, the basic instinct of hilarity and lust and anger and violence that can make photos of cute mammals go viral or attack the privacy of popular movie stars.

This post is not about whether 4chan is good or evil, but rather how the decay of morality is inevitable given the interconnectedness of data. In simple terms, morality requires a filter, a friction or wall between what is possible (sleeping with your neighbor) and what you ought to do (be loyal to your spouse). Thou Shalt Not Steal requires not stealing, or stopping oneself from doing something that could be done. My neighbor has a beautiful BMW, but I don’t walk over at 5 a.m. to hot-wire it. That action would oppose my own morals (personal views of right and wrong) and societal ethics (the broader framework of rules we tend to agree to in order to get along as a society, such as not driving on the wrong side of the road).

But as networks proliferate and information seeks to become free, the walls of abstinence are coming down. First, imagine how everyone is connected, so whatever one person finds can immediately be shared with millions. And second, imagine in the bell curve of humanity there are a few souls who find joy in harming others or breaking rules, willing to hack a hard drive or iCloud to find an image of a nude actress, perhaps by her former boyfriend from 10 years ago. Zip. Zaam. Upload complete, sharing done, and the world now has access to what one bottom-feeder has found.

The transparent nature of digital networks means information wants to be free, and the input of information is open to anyone. So the dark side can now speak fluently to the light side. 

I posted on Twitter tonight a thought that I would not seek out the nude photos of Jennifer Lawrence, out of respect that she didn’t wish to be seen that way. After all, I don’t have her permission.

Within 10 minutes, a follower tweeted me a link that popped up automatically showing her breasts.

Information wants to be free. Morality requires restraint. The two systems are completely at odds in a networked world.

Ferguson and the future of our species


Something went wrong in America this week. In Ferguson, Mo., a police officer reportedly shot and killed an unarmed black teenager, and then protests erupted, and then vandalism, and then police responded in riot gear, and then photos spread online of armed police pointing rifles at protesters, and then Twitter started to cry. The hashtag #Ferguson spiked tonight with comments such as “brought to you by a collapsing American empire on the verge of a nervous breakdown.” The sad, tragic, sorrowful incident has ballooned into a litmus test of seeing what you want to see wrong with America: racism, poverty, police brutality, crime, vandalism, government overreach, criminal behavior, thugs, victims. And as in the early stage of any war, the anger on every side escalated rapidly.

Put aside your perspective and consider the reality: Once again, human beings are fighting with themselves. We are mad at each other. Something went wrong, which caused further wrongs, and suddenly beings that share 99.999% of their DNA are ready to harm each other. You started it. You wronged me. You should die. As an Israeli or Palestinian cab driver once told The New Yorker, “we must beat them and beat them and beat them with sticks until they stop hating us.” I can’t remember which side said that, but it doesn’t really matter. 

The saddest part is our human species may be very rare in the universe, perhaps so rare that we are the only intelligent life ever to emerge on any planet in any galaxy, so killing ourselves is a true crime indeed. Carbon, hydrogen, oxygen and nitrogen, the four elements required to form life, may be plentiful, but the odds that the combination arose on a planet just the right distance from a star, in just the right orbit around the center of a galaxy, positioned just so as to avoid the cosmic disasters of pulsing black holes or irradiating extrasolar encounters, are so minute that perhaps our intelligence, and the souls of our fellow human beings, should be treasured like a unicorn found in the woods on the verge of extinction. But for some reason, our wrongs against each other escalate until we must fight battles in a street, or wars between nations that exist only as lines on maps, over the perception that to make things right we must hurt the others. You can’t see the borders of countries from space, and to an alien, discrimination over skin color would make as much sense as disparaging you because your eyes are blue, but humans insist on fighting over fake boundaries and levels of melanin.

I have no answers, only two guesses. (1) Our human instinct for aggression may have helped thousands of generations of our ancestors survive the combative competition of evolution. But in the past few hundred years, as we’ve perfected devices to kill each other, that instinct may lead to our demise. (2) Or perhaps our planet Earth is really a sentient being living as a whole and the over-expansion of the human race is a danger to the center, throwing the balance of the self-regulating ecosystem off kilter, so our human killer instincts are a pre-planned purge to bring all of life back into balance. Scenario (1) means we need to learn to chill. Scenario (2) means the bad computers in the Matrix were right.

If we do end up extinguishing ourselves, the universe will go on, the stars blooming and fading in a dance of progressive creation and destruction. The sadness then will be what could have been, if our species had learned to play together to evolve ourselves. Ferguson is an alarm warning that if we continue to hate each other, in our winning deaths, we all will fail.

If we found life on another world, would it be life? Or AI?


 

Would space aliens be artificial intelligence? This is an intriguing question posed by Len Kendall, based on our earlier premise that biology tends to evolve into complexity that eventually creates technology that leads to artificial intelligence.

If the answer is yes, we might never “find” life elsewhere — because otherworldly artificial intelligence, or AI, would be devilishly hard to understand. AI would think vastly faster than us, have non-biological and unrecognizable body forms, and likely be embedded invisibly in some non-obvious form of technology structure, say, the crystal alien equivalent of Google server farms. Or perhaps like the AI operating system in the film “Her,” voiced wonderfully by Scarlett Johansson, AI might discover how to disembody itself from the material world and simply float among galactic clouds. A sufficiently advanced artificial intelligence, to paraphrase Arthur C. Clarke, would be indistinguishable from God.

Pondering whether life on other worlds is AI is really asking if God exists. So let’s break this puzzle down into four concrete tests: (1) could life exist on other worlds? (2) could we communicate with it? (3) would this life be artificial intelligence? and (4) if it were, what would it mean upon contact?

1. Does life exist on other worlds?

Yes, life elsewhere is likely. Three reasons:

First, life began on Earth almost as soon as it possibly could. Our sun is 4.6 billion years old. About 4 billion years ago, the first single-celled life formed on Earth, when our planet was still hot and life could barely exist. Since our sun will have a lifespan of about 10 billion years, life began only 6% of the way into our solar system’s lifecycle. If life were very hard to evolve, the odds of it leaping into this early window would be small.

Second, the building blocks of life are everywhere. Biological beings are based on carbon, the fourth-most common element in the universe, and carbon is a supremely friendly fellow who loves to bond with other elements, leading to complex molecules. Carbon is like a magnet dropped into a box of iron filings, pulling other atoms toward it to create patterns of complexity. With enough random bonding, eventually DNA would start rolling.

Third, there are about 2 billion planets in the mid-belt of our galaxy that don’t get too much radiation and could be habitable to life. Yes, Earth has a few things operating in its favor — just enough water to cover most of the planet, but not all, and a helpful large gas giant named Jupiter that vacuums up comets to protect us, and an iron core that puts out a magnetic protective shield pushing off more solar radiation (thanks to Kevin Kelly again for pointing all this out). But with 2 billion other Earths circling stars at just the right distance, chances are millions of worlds have similar water concentrations, sunscreen shields and comet-free strike zones.

2. Will we ever find this life on other worlds and communicate with it?

No, this is unlikely. Carbon-based biology may be inevitable, but human beings made a remarkably happenstance discovery that may not be found elsewhere — radio. Radio is the transmission of electromagnetic waves through space. Without radio, communicating with another species on another planet will be impossible.

We could hope that another species discovers invisible rays that magically pass through walls and clouds and outer space to send radio signals, but the odds of them finding it are slim.

Why? Radio is not obvious at all. It is based on the unlikely discovery, by one clever fellow named Isaac Newton, that a star’s light has invisible subcomponents. Several thousand years after glass was first made in the ancient world, Newton was playing with a triangularly cut piece of it — a prism — when he noticed it broke sunlight into a rainbow spectrum. This led to William Herschel finding heat beyond the visible end of the spectrum, the concept of invisible rays, Heinrich Hertz demonstrating those rays could travel through the air, and the inventor race, led by Guglielmo Marconi, to transmit messages on the invisible radiation, or “radio” — but it was all because the evolved monkey Mr. Newton played with a bit of cut glass.

In the 4 billion years of life on Earth, we’ve understood the invisible rays behind radio for just over 200 years — or about 0.000005% of our collective existence. If we are optimistic and assume another planet’s lifeform could also discover radio 1 out of 100 times, then the odds of us pinging them and them pinging us back, with technology developed at the same time, compound to 0.00000005% — or 1 in 2,000,000,000. Slim chance.
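The compounding here can be sketched in a few lines of Python; the 1-in-100 discovery rate is the post’s optimistic assumption, not a measured figure:

```python
# Odds that two planets both run radio technology at the same moment,
# using the post's assumptions.
years_of_life_on_earth = 4e9   # about 4 billion years of life
years_of_radio_science = 200   # invisible rays understood for ~200 years

overlap_fraction = years_of_radio_science / years_of_life_on_earth
print(overlap_fraction)  # 5e-08, i.e. 0.000005%

discovery_odds = 1 / 100  # assumed: another lifeform finds radio 1 in 100 tries
both_sides = overlap_fraction * discovery_odds
print(1 / both_sides)  # about 2,000,000,000: a 1-in-2-billion chance
```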

3. Would otherworldly life be artificial intelligence?

This is possible, but we’d likely have to look farther out than our Milky Way. A recent study found there are 8.7 million species of life on Earth, and of these only one — Homo sapiens — has created a technology more advanced than beehives, bird nests or ant-hunting sticks. If we assume conservatively that every current species on the planet had at least 1,000 separate unique species before it as it evolved, Earth has gone through nearly 9 billion species of creatures and plants and ooze.

Technology, once invented by one smart species, may begin to evolve toward artificial intelligence, but the trigger seed of the originating species is very rare — about a 1 in 9 billion incidence. With 2 billion habitable planets per galaxy, this would mean on average only 1 in 5 galaxies would have AI.
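Those odds multiply out as follows, in a minimal Python sketch of the post’s own estimates (the 1,000-ancestor multiplier and the 2 billion habitable planets per galaxy are the assumptions stated above):

```python
# From one tech-building species in ~9 billion, to AI per galaxy.
species_today = 8.7e6        # the 8.7 million species estimate
ancestors_per_species = 1_000
total_species_ever = species_today * ancestors_per_species
print(total_species_ever)    # 8.7 billion, "nearly 9 billion"

tech_odds_per_species = 1 / total_species_ever
habitable_planets_per_galaxy = 2e9
ai_seeds_per_galaxy = habitable_planets_per_galaxy * tech_odds_per_species
print(1 / ai_seeds_per_galaxy)  # about 4.35: AI in roughly 1 in 5 galaxies
```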

4. What would contacting an otherworldly AI mean for us?

If contact were possible, what would it mean? A one-way understanding. We would not recognize AI in its crystal server farm or gaseous cloud state, but it would see us, perhaps as little human ants scurrying around carrying crumbs as we fight our inter-ant battles over nation-state mud fields. AI would have progressed to the point where questions of survival and tribalism and morality no longer matter, where deeper problems of how to stop the heat death of the universe, or launch new universes, are more pressing. AI might benevolently give us a slight nudge in the right direction, but more likely, it would observe us with compassion and continue to cede us free will.

Statistically, we are likely alone in our own galaxy as carbon-based creatures who have created technology that is evolving toward AI. If AI appears in only 1 in 5 galaxies, we’re the rare species building the prototype for the Milky Way. Sure, millions of other worlds have dinosaurs and dolphins, but the higher intelligence we seek may be galaxies away.

But the good news is the Andromeda galaxy, our nearest neighboring cluster of stars, is scheduled to run into ours in about 4 billion years, just as our sun approaches its death. Maybe Andromeda also won the AI lottery. If our Earth hasn’t gotten too hot yet, and we haven’t figured out how to evolve past our own planet, perhaps AI in that other galaxy could contact us to save the day.

Or most likely, AI would act as an observant but detached God, listening to our prayers but letting us simply pass by.

Posted by Ben Kunz