The Altruism Trick

•March 27, 2016 • Leave a Comment

For those of you who haven’t read Susan Blackmore’s “The Meme Machine”, the Altruism Trick is part of a memeplex that sucks you in with kindness. The simplest and most extreme example of it is “you saved my life, therefore I owe you my life.” With religion, this often comes in the form of “you saved me from X unhappiness…”.

More subtle versions of it are Christmas and Easter. When my children first realized that Santa Claus and the Easter Bunny weren’t real, their first reaction was “will I still get the gifts?” Bless children for their unclouded understanding of the way the world works.

With many organizations, religious and otherwise, the major method of taking advantage of this is the social support structure. Most groups provide social interaction, emotional support, and even financial help in times of trouble, as long as you live within their restrictions. For those who buy into it, it becomes a tar baby: you emotionally invest yourself in it, increasing the cost of walking away from it.

If you think I’m exaggerating, then look for examples of how religion treats those who have happy lives without religion. The more invested a person is in their philosophy, the more uncomfortable it is for them to watch people be happy without it. This tweaking of cognitive dissonance is the root cause of the persecution of heretics.

This gets even more subtle when you start applying imaginary rewards, like eternal life, salvation, and God’s blessing. If they can convince you that they have something to offer that is essential to your well-being, then giving it up becomes a horror to you, and the thought that other people aren’t going to get it becomes emotionally painful.

The Singularity, Part 2: Is this for real?

•October 20, 2009 • 2 Comments

Is it time to welcome our super-intelligent machine overlords, or is this just an over-hyped end-of-the-world scare? In order to answer this, let’s use the divide and conquer method. The first question we have to answer is:

1. Will computers ever have the processing power of the human brain?

If we make gratuitous use of the plural, computers already have the processing power of the human brain. A rough estimate clocks the human brain at about ten petaflops, or ten quadrillion floating point operations per second. As of June 2009, that is equivalent to the top forty-two fastest computers in the world combined. By another tally, it’s about 1% of the Xbox graphics chips sold before June of 2006.
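
For the numerically inclined, here’s a quick back-of-envelope sketch in Python of how that kind of comparison works. The per-machine figures are rough, assumed round numbers for illustration, not exact specs for any particular console or supercomputer.

```python
# Back-of-envelope comparison of the brain's estimated processing power
# against 2006-2009 era hardware. All hardware figures are rough assumptions.

BRAIN_FLOPS = 10e15             # ~10 petaflops, the estimate used above

CONSOLE_GPU_FLOPS = 0.25e12     # assumed ~0.25 teraflops per console GPU
TOP_SUPERCOMPUTER_FLOPS = 1e15  # assumed ~1 petaflops for a top 2009 machine

consoles_needed = BRAIN_FLOPS / CONSOLE_GPU_FLOPS
supercomputers_needed = BRAIN_FLOPS / TOP_SUPERCOMPUTER_FLOPS

print(f"Console GPUs needed to match one brain: {consoles_needed:,.0f}")
print(f"Top-tier 2009 supercomputers needed:    {supercomputers_needed:,.0f}")
```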

The obvious reaction to this is that Xboxes aren’t going to spontaneously combine into a Voltron-like mega-intelligence. At least I hope not. This doesn’t mean that distributed computing isn’t the way to go; it just means that you can’t throw a bunch of hardware in a pile and expect it to know what it’s supposed to do. You could, however, write software to make Roadrunner (a construct of 125k cores) do a reasonable imitation of a visual cortex. It would probably hurt your neck to carry it around, though. The current limitations to hardware aren’t in raw processing, but in how quickly the results can be communicated to the other members of the cluster and combined into a sensible result. With that in mind, let’s narrow the question to whether a single computing structure (a.k.a. a supercomputer) will exceed human processing power.

The first place that most people go when discussing the increasing speed of computers is to refer to Moore’s Law, but the literal definition of Moore’s Law is becoming less and less applicable. Gordon Moore predicted that the number of transistors on a chip would double every year or two. The arguments that always come up when discussing the practical limits of Moore’s Law generally involve whether or not we can continue to make our electronics smaller. If you plan to fit more transistors on a chip of the same size, then each transistor has to be smaller. We can only make things so small before quantum effects make our computing unreliable.
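
To put a rough number on how long that shrinking can continue, here’s a toy Python sketch. Doubling the transistor count on a fixed-size die means each feature shrinks by roughly a factor of the square root of two per doubling; the starting node and the atomic-scale floor below are assumed round numbers, not actual process specs.

```python
import math

# Toy model: how many Moore's Law doublings fit before features hit atomic scale?
# The starting node and atomic floor are assumed round numbers for illustration.

feature_nm = 45.0        # assumed 2009-era process node, in nanometres
atomic_floor_nm = 0.5    # assumed floor: a few silicon atoms wide
years_per_doubling = 2

doublings = 0
while feature_nm / math.sqrt(2) > atomic_floor_nm:
    feature_nm /= math.sqrt(2)   # doubling density shrinks features by sqrt(2)
    doublings += 1

print(f"Roughly {doublings} doublings (~{doublings * years_per_doubling} years) "
      f"before features reach ~{feature_nm:.1f} nm")
```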

In reality, clock speed is becoming less and less relevant to computer speed. For the past thirty years or so, faster processing speed has been the low hanging fruit of chip development, but only our advertising-driven market believes that it’s the only fruit out there. With this in mind, many futurists have expanded Moore’s Law to cover a “bang-for-buck” calculation of computing power. Call this the Greater Moore’s Law.

As it turns out, the maximum switching speed of a neuron is only about a kilohertz, roughly a millionth of what even a desktop computer can manage. The brain compensates for its lack of speed with an immense amount of connectivity. A typical neuron can be connected to other neurons by as many as 50,000 synapses. This suggests that we still have a great deal to learn in terms of networking efficiency.
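
For the curious, the usual back-of-envelope estimate behind figures like the ten petaflops mentioned in the previous post multiplies three quantities: neuron count, synapses per neuron, and average firing rate. The inputs below are loose assumptions (published values vary enormously), so treat this as a sketch of the method rather than a measurement.

```python
# Crude estimate of the brain's "operations per second":
# neurons x synapses per neuron x average firing rate.
# All three inputs are rough assumptions that vary widely between sources.

neurons = 1e11            # assumed ~100 billion neurons
synapses_per_neuron = 1e3 # assumed effective synapses per neuron (far below the 50,000 maximum)
avg_firing_rate_hz = 100  # assumed average rate, well below the ~1 kHz maximum

synaptic_ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{synaptic_ops_per_second:.0e} synaptic operations per second")
# With these assumptions: ~1e16, i.e. roughly the ten petaflops mentioned earlier.
```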

Further complicating matters is that every section of the human brain has its own custom-designed architecture. While everything is made of neurons, the parts that interpret images are completely different in structure from the parts that control our movements. Supercomputers, on the other hand, are almost exclusively built from a single model of microchip.

To avoid belaboring a point, let’s just say that we have LOTS of room for improvement as far as computers go. Can we continue doubling our performance for the next fifteen years? Probably, and even if we can’t, we’ll inevitably reach the point where superclusters can out-compute humans within our lifetimes.

Please tune in for the next installment where Rob asks the question “Will computers actually be able to think like humans?”

The Technological Singularity, Part 1: What is it?

•October 5, 2009 • Leave a Comment

It’s difficult to say who first theorized the runaway acceleration of intelligence, but the theory has been most effectively argued by Ray Kurzweil. His arguments can be heard in his presentation to TED. The CliffsNotes version is that technology has been progressing exponentially since the dawn of recorded history. Everything from transportation to energy production to intelligent operation of machines has progressed at an exponential rate. The archetypal example of this is Moore’s Law, which states that the number of transistors on a chip of a given cost doubles every two years. Separate but related is the observation that processing speed doubles at a similar rate, and so far it has. Storage capacity has also kept up with this, as have communication speeds, display resolution, and many, many other similar measurements. Although Moore’s Law was originally about transistor cost, the general concept has been extended to cover all of these.

The trick to the singularity is that the bulk of technological development thus far has occurred at the speed of the human brain. It’s not like it was going to develop itself. The rules will change, however, when machines become capable of designing the next generation of machines. If machines with human-level intelligence design a new set of computers, the next generation of computers will be produced in two years and run twice as fast. That part is no different than what’s going on right now. If we set the next generation of computers on the task of improving on their own design, they will be running at twice the speed and complete the task in a single year. The next generation of computers will be twice as fast again and complete the task in only six months, then three months, then a month and a half. Going by the purely mathematical model, within four years of creating a human-speed machine intelligence capable of designing computers, machine intelligence will be working infinitely fast.
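
The four-year figure is just the sum of a geometric series: the first machine-driven design cycle takes two years, and each later cycle runs on hardware twice as fast, so it takes half as long. A minimal sketch of that arithmetic:

```python
# Each design generation runs on hardware twice as fast as the last,
# so each cycle takes half as long: 2 + 1 + 0.5 + 0.25 + ... years.

cycle_years = 2.0   # first machine-driven design cycle
elapsed = 0.0

for generation in range(1, 11):
    elapsed += cycle_years
    print(f"Generation {generation:2d} ready after {elapsed:.3f} years")
    cycle_years /= 2  # the new machines design their successors twice as fast

# The running total never passes 4 years: the purely mathematical
# point at which machine intelligence is "infinitely fast".
```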

This asymptotic growth of intelligence is what spawns the term “The Singularity”. This event has also been called by such fanciful terms as “The Technorapture” and “The Terminator.” It is at this point that our foundations of reality suffer a divide-by-zero error and our world seg-faults into the ultimate blue-screen.

What exactly does the theory mean when it refers to “infinitely fast intelligence”? Are we talking about a superhuman machine intellect with god-like powers? That’s what I would call the high-end prediction, but it’s not necessarily accurate. You have to remember that, in the words of Grant Naylor, an IQ of 6000 is equivalent to sixty postal workers. The low-end prediction, however, suggests that the total computing power of supercomputing clusters like Google’s would be at each individual’s disposal. If you think we’re impatient now, just wait.

One thing that we can definitely expect is a collection of robots that far exceed our physical capabilities in all areas. Even covering only what we know is on the way, this is where reality really starts to look like science fiction. We’re already used to machines that are stronger than us, and beating us in endurance tests has never been an issue. Cars that drive themselves better than we can will probably exist well before the singularity hits. We’ve also recently seen some amazing breakthroughs in robotic manual dexterity. We can only presume that, physically speaking, humans will be old hat.

In my next piece, I’ll go into an in-depth description of how likely this is. You can expect to read about Moore’s Law and existential philosophy regarding the uniqueness of human intelligence. Technically I’ve already gone into it, but the whole thing got a bit lengthy so I moved it to its own entry.

Tips for Successful Telecommuting

•October 4, 2009 • Leave a Comment

Question: Why am I writing an entry on telecommuting? What’s my interest in this?

Answer: I work in a tech field — instructional design and training — in which telecommuting has long been an accepted option, if not the standard. In the past several years, and particularly during the current and most recent recessions, I’ve seen more of my colleagues working from home offices just as I’ve seen fewer contracts and full-time gigs require learning consultants to be onsite all or even most of the time. Although I function well in an external office environment, I far prefer the flexibility and lack of a commute involved in working from my home office. I love working from home. But not all clients or employers are comfortable with telecommuting employees or contractors, so one of my goals is to be so good at this that the client or employer essentially does not notice I’m not sitting in a corporate office somewhere. If they don’t know I’m working from home, then my telecommuting is not a problem. Yes, my motivation is personal and selfish, but it’s a rather strong motivation.

Question: How hard can telecommuting be?

Answer: I’m amazed at how many people say to me, “I’d love to work from home. But I’d be horrible at it. I’d never get anything done.” In all honesty, it does take a certain degree of discipline, but I find I can set up cues in my environment and mindset that help me sort of naturally enforce the discipline I need to work effectively from home. These factors make this whole issue of discipline much less of a chore.

Cues and Guidelines for Successful Telecommuting

  • Build a habit space and protect it. This may include music you work effectively to, lighting, and other elements of the physical space in which you work, but it absolutely involves protecting/implementing the work habit space during working hours to help you stay focused and working. I have a designated office space (albeit one that shares elements with my personal space, such as my chair). Also, and very importantly for me, I have a few music playlists to choose from depending on what I need to get done and how quickly I need to do it. One playlist helps me combat stress; a few others are very high energy (Marilyn Manson, Disturbed, Offspring) for when my stress is low but I have a lot to accomplish in a short amount of time. Music keeps me focused on what I need to accomplish and reduces the likelihood that I’ll be distracted by activities on my personal computer (instant messaging, email, news, gaming, etc.).
  • Create a standard routine. Establish set working hours and remind yourself that during these hours, you should be working. I find that I cannot work effectively if the kitchen is a mess (yes, I’m that obsessive), so every morning I make coffee, feed the cats, and straighten the kitchen before I come up to my office and start working. Eliminating any part of this routine leaves me somewhat unsettled and gives me an excuse (one I do not need) to do something besides work during work hours — so I’ve learned to complete the routine every day.
  • Make it official. Treat your home office environment as seriously as you would an office space on your company’s property in terms of phone calls, use of company equipment (I use my personal computer for all non-work activities during the day, such as checking personal email periodically or looking at a link someone sent me), and time spent away from your desk. (Can I go take a nap? Yes, but do I? Almost never, in fact; not unless I’m very sick, in which case I notify my client and associate contacts so they know I am away.) If you have to use your own computer for work activities, consider setting up an email and desktop account that is strictly for work: use a work-friendly background and create a user account that does not have access to games, personal instant messaging, and the like. Always answer the phone professionally — you never know who is calling, and during office hours you need to treat your equipment as work equipment. Caller ID aside, answering the phone professionally sends the message to anyone who calls — and reinforces the message to yourself — that you are in your office, working. Keep calls short, as you would if you were taking a personal call on corporate property with a manager monitoring your activities.
  • Engage in activities that will help you keep yourself organized. Track your hours/time/projects, maintain to do lists on post-it notes or virtual or physical notepads, keep daily notes logs of work activities you will need to reference later and keep your inbox and desk top (both physical and virtual) organized.
  • Set and enforce limits. This relates to limits on everything from social networking sites, news, online shopping (or any other online activities from blogging/microblogging to gaming), lunch with friends or other appointments, personal time phone calls and breaks.
  • Make sure you’re accessible during work hours. You need to be accessible by email, phone, instant messaging, or all three — whatever the standard means are at your company. If you plan to be inaccessible for a period of time during the day, notify relevant parties such as clients, managers, or co-workers who may try to contact you during this time. That said, I make it a habit to severely limit my AFK (away from keyboard) time, and I also try to move it to periods of the day when the people with whom I work are less likely to be affected. This can be somewhat easy to do given the time zones of some of the individuals with and for whom I work currently, though in the past it has been more of a challenge.
  • Make the most of face/voice time. When you are on the phone, in the office, or elsewhere in the physical presence of clients, coworkers, and managers, take the time to bond a little with these individuals. It’s much easier to cut someone you know and like a little bit of slack when you can’t find them on Sametime than it is to show a similar level of courtesy to someone you barely know, much less like. Even very effective virtual communicators will tell you that they can accomplish much more in a significantly shorter period of time when speaking with someone in person or over the phone. The cues we pick up in person or even just on the phone (or in a TeamSpeak or Ventrilo channel, as gamers can attest) are considerably richer and provide significantly more meaningful — and often memorable — information than we can glean through virtual communications alone. Take these opportunities to get to know the people in your virtual work environment, and try to build a friendly yet professional rapport with them. Such relationships can pay off in the long run in terms of favors or networking.

Question: What if your boss has not yet agreed to allow you to telecommute some or all of the time?

Answer: I have found at least one good resource on the web with practical suggestions for how to go about this.

What traits will a successful telecommuter have? I’ve known many successful telecommuters, and we certainly do not all share the same characteristics. In some cases, it’s more about the characteristics necessary to be successful at a specific job, and then factoring telecommuting into the equation. According to Sylvie Fortin, a successful telecommuter will be a self-motivated, obsessive-compulsive perfectionist who is cheerful and optimistic. I’m not entirely sure I agree, though I do count the first three among my traits, and for a goth, I’m pretty cheerful and optimistic…at least at work. >:-}

The Technological Singularity, Introduction

•September 27, 2009 • Leave a Comment

The rise of machines to a level of intelligence that rivals or exceeds that of humans is a doomsday scenario that has been the topic of a great deal of science fiction. Whether it’s the nuclear game-playing machines of WarGames or Terminator, or the marching mechanical monsters of Battlestar Galactica or The Matrix, our experts have always chased away our boogiemen with the assurance that machines just can’t do that yet.

Yet. If not now, when WILL machines be capable of that? Many people believe that our divinely created minds are something that we could never possibly recreate ourselves, but the theory of the Technological Singularity suggests that it may occur as soon as twenty to thirty years from now. The history of technology is filled with examples of humans building machines that exceed our own capabilities. The industrial revolution came about because we figured out how to make machines that worked harder, faster, and more precisely than us at the most rudimentary of activities. This changed the word “manufacturing” from being something that happened by hand (manus = hand) to something that happened by machine. Most of us lived through the computer revolution where suddenly machines were able to calculate and handle information with previously unbelievable speed and accuracy. Back then we were amazed at a hand calculator’s ability to gobble down “24981167 / 87963168 =” in an instant.  The Singularity suggests that we’re approaching another revolution of that kind, one where machines are capable of doing all of the conceptual processing that makes humans special.

This series of (who knows how many) articles is going to explore this concept: why its proponents think it will happen, how likely it is, and what might occur if it does. The first part will explain the theory itself and the arguments behind it. Another will describe what our current “thinking” technology is capable of, and a third will describe what we can reasonably expect of a machine revolution, and which elements of various stories just plain don’t make sense. I hope that readers will join in and share their ideas.

Pseudofacts

•September 20, 2009 • Leave a Comment

A pseudofact is a piece of information that, although it’s completely false, is still a whole lot more interesting than the truth. This means that it gets repeated preferentially over the truth. Because, for most of us, truth is little more than the information that we hear repeated most often, this can give many people a solid conviction that a completely bogus piece of information is The God’s Truth.

Let’s take the case of the 8×8 rule. The oft-repeated statement is that you should drink eight eight-ounce glasses of water a day. This advice is so ingrained into our medical wisdom that you’ll see it repeated on lists of the top ten simple ways to improve your health, and many family doctors will insist that it’s true. The problem is that it has no factual basis. I won’t re-hash the Snopes analysis of this one, but it all comes down to a study that said we lose about that much water in a day. The rule completely ignores the part of the same study that says we get most of that back from moisture in the food we eat. Let’s face it, though: “drink eight glasses of water a day” is just a more interesting piece of information than “humans don’t need to think about their water consumption unless they’re doing something strenuous”, and you will probably keep hearing it for generations.

That’s a fairly benign example, selected specifically because it’s empirically provable and inoffensive. Pseudofacts are an example of memes with very strong memetic fitness out-competing rival memes in natural selection. There are many things that contribute to this fitness, but the most heinous one is just plain boredom. Urban legends are an excellent example of things in this category. We’d rather repeat something fun and interesting than something boring, because we ourselves don’t want to be boring.

The dangers of this can be seen when it’s applied to something more topical and less inoffensive. Let’s pull a piece out of the evening news for this purpose. An employee of a company is caught on video telling a couple how to get funding for a house so they can run a brothel there. The employee tells the couple that she used to be a madam herself and, oh, incidentally, that she shot her husband dead in cold blood. The public in general doesn’t know much about the company except that they often find funding for housing for the underprivileged, and that they had something to do with trying to increase the minimum wage.

In general, this leads us in two different directions. The first is the idea that the employee is a nutcase who managed to get past the interviewing process.  The second is that the company has no problem with its employees running a prostitution ring when they should be working, and possibly acts as a clearing house for such illegal activities. For a moment, let’s set aside anything you may actually know about the organization. Which of these two possibilities is more interesting? Which one harbors the greatest potential for a juicy story? Which one of the two do you think would be more fun to tell your friends? If you answer the first one, you’ve spent too many years working in HR.

The result is a form of bias that can completely skew our perception of certain fields. Pseudofacts will cluster into bodies of evidence that support completely bogus perspectives. They get repeated often enough that anyone looking for correct information will get the same bad information over and over until they stop looking. In cases where accurate information isn’t actually available, it can permanently overwhelm the truth, resulting in legends and fables.

Question authority! (…in cooking?)

•September 7, 2009 • Leave a Comment

Have you ever had one of those “a-ha!” moments about your own psyche that gives you an unusual view into the workings of your own personality, as if a small window into your brain suddenly unshuttered itself and allowed you a rare view of how you work? I seem to have experienced more of these over the past few years, and each time I’m struck not only by the revelation itself, but also by the feeling that I should have known — and was perhaps on the verge of knowing — this about myself for some time.

Take, for example, my recent realization about cooking.

Specifically, I (finally) realized that I do not actually detest cooking in spite of the fact that I have spent years believing I like cooking about as much as I like cleaning litterboxes, getting lost while driving or going to a sports bar. Cooking, in my life, has long been a necessary evil, something I wished I could perpetually pay someone else to do for me. For years I tried to make the best of my dislike. I theorized I just didn’t know enough about cooking, wasn’t good enough at it, hadn’t yet amassed enough years of experience to have learned to like it. This theory prompted me to purchase cookbooks, search the internet for recipes, doubtfully request cooking tips from friends.

My tastes have often danced in the domain of gourmet vegetarian, but I found that the recipes with 36 ingredients (many of which must be purchased at this or that specialty or ethnic store), requiring 18 distinct steps and a combination of 27 dishes and utensils (including a few I had never heard of and would probably never use again) to prepare, intimidated and frustrated me. I could read one such recipe, lose all nerve and motivation, and shamefully admit defeat by cooking some pasta and drenching it in a jar of pre-fab marinara.

Okay, I thought, I need to curb my tastes to accommodate my present skill set, patience and supplies. Cue more cookbook purchases, more internet recipe searches. Entering the least intimidating and quickest of these into my gorgeous and friendly recipe software (Now You’re Cooking). Categorizing all of the ingredients, generating an item-by-aisle shopping list and trying this again… Simpler recipes certainly made cooking easier, physically speaking, at any rate. And recipes with short (15-45 minute) prep and cook times satisfied my desire to be in the kitchen for as little time as possible. Limiting my grocery shopping to no more than one grocery store decreased the onerous nature of shopping for ingredients.

Yet, still…I HATED cooking. No, really. It took a tremendous effort to convince myself to shop, prepare, cook. My overwhelming disinclination to perform each of these steps led me to associate with each of them: suffer, agonize, writhe. In spite of what I felt were my best efforts, I had not yet succeeded in even reaching abide, tolerate, acquiesce.

Incidentally, I have always resented authority.

Though I am oddly willing — even delighted — to anticipate and proactively meet the needs and expectations of others (making me quite popular with my clients), I bristle and bridle over being asked to do something. That said, my self-control and desire for income preservation are sufficiently strong that others rarely find this out about me. Rob is certainly aware of my, ahem, feature, but I’m not generally known in the circles through which I frequently pass as “that issue girl who constantly rebels against authority.” I keep THAT girl tightly inside, locked in a little cubical room, smacking violently from wall to wall like a self-propelled super ball. She rarely escapes, and when she does, I’m quick to retrieve and re-imprison her while cleaning up the devastation she left in her wake, apologetically offering excuses for my (her) extreme behavior.

But what on earth does this have to do with cooking?

This summer, while Rob was out of work and after he had been diagnosed with extremely high cholesterol and triglycerides, I realized I would have to spend significantly more time cooking because our budget and his health couldn’t afford for us to eat out nearly so often. I largely tried to ignore my feelings of being abandoned by the lifeboat of restaurants and focused on this “opportunity” to try cooking with new and different ingredients, including the meat substitutes I had largely avoided foisting on my family (I’m the vegetarian, so why should they suffer through nearly unpalatable fake meats? Vegetarianism is my choice, not theirs).

Predictably, I bought another cookbook. I performed more internet searches for yet more recipes.

Our friend Jason has been, for years now, encouraging me to try Quorn meat substitutes. I had tried the somewhat turkeyish flavored loaf-like product, serving it with buttery mashed potatoes, green beans, buttered croissants and vegetarian mushroom gravy. Rob and the kids ate it without complaint, but I would have to eliminate the mashed potatoes, butter and croissants (i.e., the yummiest parts of this meal) given Rob’s dietary restrictions.

A search of the Quorn website yielded little in the way of recipes I wanted to — and felt I could — manage. So I resorted to something I could handle: Chinese stir fry with Quorn chickenish chunks. I didn’t even need a recipe to prepare this meal, but we couldn’t eat it every night. Next I tried substituting fake chicken for pasta in some of the recipes I already knew, and this led me to experiment with what was essentially Italian stir fry. Oddly, I wasn’t minding this cooking much at all. Again no complaints, but we were all getting the slightest bit tired of stir fry of one flavor or another.

Slowly, as I began to trust this new fake meat product, I wondered how it would fare in traditional chicken recipes. What about cinnamon chicken, orange chicken, chicken and rice with cream of mushroom soup or curry chicken? I searched for more recipes, this time real chicken recipes, but I noticed something interesting: the very thought of following a recipe brought back the old bristling and bridling. Yet, if I could find answers to specific questions (what other standard ingredients appear in orange chicken recipes? How is coconut rice made? How much rosemary is added to a pound of rosemary fake-chicken? Do you need to dip fake chicken in egg before breading it in light cinnamon breading?), I could scrap the recipes and trust my own developed/developing cooking instincts. And, shockingly, I actually enjoy this new cooking method where I creatively combine ingredients without the safe — and stifling and dominating — presence of an official recipe.

Holy shit! I don’t hate cooking; I hate following instructions.

Since this realization I have been in the process of refining how I cook. I’m not the least bit inclined to purchase cookbooks. I found a great site with lots of trustworthy chicken recipes (RealSimple.com) and most of the recipes take significantly less time to prepare when substituting Quorn (though the olive oil or other cooking oil must generally be increased to avoid burning the fake chicken). Sometimes I follow a recipe (…mostly) to prepare something I’ve never cooked before, but for the most part I use the recipes to find out what spices combine well together in cultural foods with which I’m not very familiar, or to identify rough baking temperatures and times or preparation methods of new ingredients.

Honestly? Questioning authority has served me well throughout my life. Questioning cooking authority has been no different, and my approach is the same: find the elements of the authoritative direction that are beneficial or must be adhered to in order to avoid problems and rely on myself for the rest.