Thursday, January 31, 2013

More Easy Ways to Improve Your Writing

Punctuation matters.
A while ago, I wrote about some easy ways to pump up your writing.  It turned out to be a fairly popular post (though nowhere near as popular as my January 15 post on "How to Write an Opening Sentence," which got a bewildering 37,000 page-views). I thought I'd share a couple dozen more of my favorite tricks for forcing oneself to do a better job of writing. So here goes.
DISCLAIMER: All rules can be broken. Try sticking to them first.
  1. On a separate piece of paper or in a separate doc file, write down (as simply as you can) your main message: what you want to say. Keep that piece of paper (or doc file) visible, off to the side, while you work.

  2. Avoid long sentences.

  3. Try varying your sentence lengths more. Paragraph lengths too.

  4. When in doubt, leave it out. Fewer words equals less revision.

  5. Don't hoard a good phrase until the ideal situation comes. Write full-out, all the time. Hold nothing back.

  6. "Don't tell me the moon is shining. Show me the glint of light on broken glass." (Chekhov)

  7. An ultra-short sentence at the beginning or end of a paragraph adds impact. Try it.

  8. Go back to the last thing you wrote and strip all the adjectives and adverbs out. How does it read now?

  9. Stop using "seamlessly." (Unless of course you're a seamstress.)

  10. Stop using "effectively." It adds nothing.

  11. Stop using "burgeoning." Trite. Lazy.

  12. Never use "whilst," "thusly," "ergo," or any other arch words that make you sound like an insufferable pedant.

  13. "Substitute 'damn' every time you write 'very'. Your editor will delete it and the writing will be just as it should be." (Mark Twain)

  14. Stop giving a shit what your English teacher thinks.

  15. Get on with it.

  16. If any sentence has you working on it longer than 60 seconds, rewrite it immediately as two or more short sentences. Recombine.

  17. Ask a friend to change three words in something you just wrote.

  18. Go back and edit something you wrote a year ago. Notice how much of it stinks.

  19. In thirty seconds or less, take three words out of whatever you just wrote. If you can't do it, the penalty is to take out six words.

  20. Learn to recognize, and stop using, overused expressions. A good rule is: If you've heard it before, don't use it. Things like "hell bent," "all hell broke loose," "[adjective] as the dickens," "so quiet you could hear a pin drop," etc. will creep into your writing while you're not looking. Go back and find such atrocities. Rip them out. Set ablaze. Bury.

  21. Specificity counts. Your friend doesn't drive a car; she drives a tired-looking red Camry. It's not a "sweltering hot day." It's the kind of summer day that makes even pigeons sweat. The gunman didn't have a gun; he had a .45-caliber semi-automatic Glock. See the difference?

  22. Don't use the same adjective, adverb, or pronoun more than once in the same paragraph (unless of course somebody is holding a .45-caliber Glock to your face). See how long you can hold out before using any word a second time. Think of synonyms, alternative phrasings, pseudonyms, creative euphemisms, indirect references, colloquialisms, never-before-heard coinages -- anything except the same old word, repeated.

  23. Elmore Leonard once said: "If it sounds like writing, I rewrite it."

  24. Leonard also said to "leave out the parts people skip."

  25. Read good writing.

Wednesday, January 30, 2013

Are Placebos Really Sugar Pills?

Is this really what a placebo amounts to?
Over the weekend I was reading some medical studies involving placebos. The experimental protocols were of the standard double-blind type in which a control group gets a placebo without either the group or their doctors knowing it.

One of the studies involved a medical condition for which sugar (supposedly the main ingredient of placebos) might be anything but biologically inert, and I thought to myself "Okay, certainly the doctors would know that and would choose a sugar-free placebo for the study." But when I read the study I couldn't find an ingredients list for the placebo. Maybe it was a sugar pill. Maybe not. We'll never know.

Then I started to wonder: Who makes placebos? Where do they come from? Is there a widely used "standard placebo" that scientists typically use in studies? What does it contain, exactly? And so on.

Let me skip right to the punch line. It turns out the drug companies (the very people who perform and/or fund the efficacy studies FDA relies on when granting new drug approvals) manufacture their own placebos -- and aren't required to list the ingredients.

One reason this is so disturbing is that drug companies are allowed to use (and do increasingly use) active placebos in their studies. An active placebo is one that is biologically active, rather than inert.

"But wait," you're probably saying. "Isn't the whole point of a placebo that it's biologically inert, by definition?"

You'd think so. But you'd be wrong. Active placebos are designed to mimic the side effects of drugs under study. So for example, if a new drug is known (or thought by the drug company) to produce dry mouth, the drug company might use a placebo containing ingredients that produce dry mouth. That way, of course, they can say things in their ads like "[drug name] has a low occurrence of side effects, such as dry mouth, which occurred about as often as they did with placebo."

In a 2010 study by Beatrice A. Golomb, M.D., Ph.D. (and colleagues), published in the Annals of Internal Medicine (19 October 2010;153(8):532-535), some 150 recent placebo-controlled trials were examined to see how many of them listed placebo ingredients. Only eight percent of trials using placebos in pill form (the majority of trials) disclosed ingredients. Overall, three quarters of studies failed to report placebo ingredients.

One of the trials in the Golomb study involved a heart drug. Over 700 patients participated, so it was a good-sized study by any definition. In a subgroup of patients that had recently experienced a heart attack, the drug in question (clofibrate) was no better than placebo in extending patients' lives. But the placebo was actually quite effective, reducing the group's mortality rate by more than half. However: the placebo was olive oil. And olive oil is known to fight heart disease.

Carelessly chosen placebos can also have a harmful effect. Dr. Golomb tells of receiving a call from HIV researchers whose drug study had to be aborted because the placebo group was "dropping like flies." The placebo contained lactose. It's well known that lactose intolerance is higher for HIV patients than for the general population.

It's inconceivable (to me, at least) that there are no laws requiring drug companies to list placebo ingredients. The fact that drug companies can formulate their own placebos (some of which are biologically active) and not list the ingredients, in research aimed at getting approvals from FDA, is shocking and outrageous.

It's quite obvious that researchers (whether associated with drug companies or not) need to agree on a standard placebo of some kind (or at least standards for placebos).

FDA needs to review its policies on placebos and either outlaw "active placebos" or rigorously define acceptable conditions for their use.

When I say FDA needs to review its policies on placebos, I'm referring to such (ongoing) practices as letting drug companies de-enroll study subjects from studies based on individuals' sensitivity to placebos. (Drug makers usually begin a study with a two-week "washout period" during which time potential subjects take either a placebo, or nothing. Subjects who respond to the placebo can be summarily taken out of the study before it begins in earnest.)

The current anarchy that prevails with regard to placebos calls into question the reliability not just of drug-company research but of virtually every placebo-controlled study ever done. Which is a hell of a thing to have to say, or even think about. In fact it's nauseating.

Someone, please: Pass me the Tic-Tacs.

Tuesday, January 29, 2013

Funny Metaphors

Humor is a good thing in metaphors. But not unintentional humor.

Here are some stupendously warped metaphors and similes from student essays. Read 'em and weep.

1. Her vocabulary was as bad as, like, whatever.

2. The ballerina rose gracefully en pointe and extended one slender leg behind her, like a dog at a fire hydrant.

3. Hailstones leaped from the pavement, like maggots when you fry them in hot grease.

4. The revelation that his marriage of 30 years had disintegrated because of his wife's infidelity came as a rude shock, like a surcharge at a formerly surcharge-free ATM.

5. He spoke with the wisdom that can only come from experience, like a guy who went blind because he looked at a solar eclipse without one of those boxes with a pinhole in it and now goes around the country speaking at high schools about the dangers of looking at a solar eclipse without one of those boxes with a pinhole in it.

6. The little boat gently drifted across the pond exactly the way a bowling ball wouldn't.

7. She grew on him like she was a colony of E. coli and he was room-temperature Canadian beef.

8. She had a deep, throaty, genuine laugh, like that sound a dog makes just before it throws up.

9. It hurt, the way your tongue hurts after you accidentally staple it to the wall.

10. From the attic came an unearthly howl. The whole scene had an eerie, surreal quality, like when you're on vacation in another city and Jeopardy comes on at 7:00 p.m. instead of 7:30.

11. John and Mary had never met. They were like two hummingbirds who also had never met.

12. Her hair glistened in the rain like a nose-hair after a sneeze.

13. The plan was simple, like my brother-in-law Phil. But unlike Phil, this plan just might work.

14. The young fighter had a hungry look, the kind you get from not eating for a while.

15. McBride fell 12 stories, hitting the pavement like a Hefty bag filled with vegetable soup.

16. He was as lame as a duck. Not the metaphorical lame duck, either, but a real duck that was actually lame. Maybe from stepping on a land mine or something.

17. She walked into my office like a centipede with 98 missing legs.

18. He was deeply in love. When she spoke, he thought he heard bells, as if she were a garbage truck backing up.

19. Even in his last years, Grandpappy had a mind like a steel trap, only one that had been left out so long, it had rusted shut.

20. He fell for her like his heart was a mob informant and she was the East River.

Monday, January 28, 2013

How to be a Master of Metaphor

Nothing makes a piece of writing sparkle like a good metaphor. Well-crafted metaphors and similes are the RPGs of diction: destroyers of boredom, exploding munitions of meaning.

What is "metaphor"? Term.ly defines it as "a figure of speech in which an expression is used to refer to something that it does not literally denote in order to suggest a similarity." I like to think of it in simpler terms: enlisting a vivid image in service of description. Diction's lubricant. The rib-spreader that exposes a writer's true heart.

What is "simile"? A metaphor in drag; a metaphor with the word "like" in it. Nothing more.

A simile is like a white lie; you're telling the reader that Thing A is like Thing B, even though in a literal sense, the two are not the same. A metaphor, on the other hand, is a pretend-lie. You're calling one thing something else entirely. Stephen Colbert explains it this way: "What's the difference between a metaphor and a lie? Okay, I am the sun, you are the moon. That's a lie. You're not the moon."

What makes a good metaphor (or simile) good?

  • Simple and clear: A good metaphor is vivid, useful, concise, and (when successful) memorable. An elaborate, baroque, overworked, or otherwise wordy metaphor topples under its own weight.
  • Highly visual, if possible: Concrete language that evokes a clear mental image is always a good idea, for any kind of writing.
  • Original: Not lame, not something anybody has used before.
  • Unexpected, perhaps even shocking: A good metaphor doesn't leave the reader dumbstruck; it leaves her Tasered in the nipples. It's a subversion of expectation.
  • Not mixed: An inconsistent image destroys meaning rather than augmenting it.
  • Parallel in tone with whatever you're describing: If you're describing weird, produce a metaphor that's weird. If you're describing upbeat, be sure the metaphor is upbeat. You're not just denoting imagery; you're conveying tone. Or should be.
  • Entertaining: The reader should smile, maybe even laugh.

Sometimes it doesn't hurt to inject a bit of absurdity. One time, I overheard somebody talking about the dangerously worn-out tires on his car. He spoke of tires that "were so bald you could drive over a dime and tell if it was heads or tails." I'd never heard that expression before. It stayed with me.


Examples of Hackneyed Metaphors and Similes

  • "[to] rise head and shoulders [above something]": Thoroughly overused.
  • "Music to my ears": Horrible.
  • "Two peas in a pod": Offal.
  • "Heart of stone": Nauseating.
  • "The light of my life": Cloying.
  • "Raining cats and dogs": How about raining llamas and dromedaries? Anything but housepets.
  • "[our culture is a] melting pot." How about "a sumptuous ethnic ragout"?
  • "Sank like a stone": The essence of trite.
  • "[He or she turned] white as a sheet." OMG please no.
  • "He was awkward; all knees and elbows": No longer original. Try something like: "He was awkward, all knees and elbows, like a newborn giraffe."


Metaphor: Good Examples

  • "Advertising is the rattling of a stick inside a swill bucket." George Orwell
  • "Art washes away from the soul the dust of everyday life." Pablo Picasso
  • "Fill your paper with the breathings of your heart." William Wordsworth
  • "Courage is grace under pressure." Ernest Hemingway
  • "The night wind was a torrent of purple darkness." Unknown
  • "I tom-peeped across the hedges of years, into wan little windows." Vladimir Nabokov
  • "A bland agenda. Political meatloaf." (Yours truly)
  • "A wicker basket weighed down with half-rotted ideas." (Yours truly)


Simile: Good Examples

  • "The air smelled sharp as new-cut wood, slicing low and sly around the angles of buildings." Joanne Harris
  • "The dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke." John Steinbeck
  • "Elderly American ladies leaning on their canes listed toward me like towers of Pisa." Vladimir Nabokov
  • "There was a quivering in the grass which seemed like the departure of souls." Victor Hugo
  • "His face was deathly pale, and the lines of it were hard like drawn wires." Bram Stoker
  • "To live anywhere in the world today and be against equality because of race or colour is like living in Alaska and being against snow." Unknown

It's surprising how many people quote Margaret Mitchell's little suck-ass line about Scarlett meeting Rhett as an example of a beautiful simile: "The very mystery of him excited her curiosity like a door that had neither lock nor key." First of all, "the very [sight, mystery, image, etc.] of [something]" is a repugnantly arch construction. But more to the point: A door that has neither lock nor key is just your average door, isn't it? Most doors have neither lock nor key. It seems Scarlett got easily excited by a cheap, lockless door. (My kind of woman.)

Tomorrow, I'm going to continue on this subject with some examples of truly humorous metaphors and similes drawn from that inexhaustible well of preposterous nonsense, student essays. Don't miss tomorrow's post. You'll be sorry as a whore in church if you do.

Saturday, January 26, 2013

How to Use Webfonts in Blogger

Lately I've been experimenting with Google Webfonts, which is a terrific way to get started with webfont technology. The fonts are free and using them is a snap. Scroll down for a few sample fonts.

Once you pick out the fonts you want to use, just insert a line like the following in the <head> section of your template. Note that this is all one line (ignore the wrapping):

<link href='http://fonts.googleapis.com/css?family=Arbutus+Slab|Belgrano|Tinos:400,400italic|Ovo|Arapey:400italic,400|Alegreya:400italic,400,700|Ledger|Adamina|Andada' rel='stylesheet' type='text/css'>

Right after that line, insert some style classes as follows:

<style>
.ovo  { font-family: Ovo, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.arbutus  { font-family: 'Arbutus Slab', Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.tinos  { font-family: Tinos, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.arapey  { font-family: Arapey, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.alegreya  { font-family: Alegreya, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.ledger  { font-family: Ledger, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.adamina  { font-family: Adamina, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
.andada  { font-family: Andada, Arial, serif; font-weight: 400; font-size: 12pt; color: black; }
</style>


Note that you can put any name you want after the dot. For example, instead of ".ovo" you could name the class ".fancy" or ".whatever" or ".ovo12pt." Don't start the class name with a number, though; a class selector that begins with a digit (like ".12ptOvo") isn't valid CSS and will fail in browsers. Also note that the '+' in "Arbutus+Slab" belongs only in the URL; in the CSS itself, a two-word family name is quoted, as in 'Arbutus Slab' above.

Save your template, and you're ready to use the fonts. How? One way is to enclose a section of text in a <span> that invokes the class you want, like this:

<span class="ovo">
Text goes here. Blah blah blah.
</span>
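
Incidentally, building that long family= URL by hand gets error-prone once you have more than a few families. Here's a quick Python sketch (a hypothetical helper of my own, not part of anything Google ships) that assembles it, turning spaces into '+' and joining family specs with '|':

def webfont_url(families):
    # Family specs look like 'Ovo', 'Arbutus Slab', or 'Tinos:400,400italic'.
    # Spaces become '+'; specs are joined with '|'.
    joined = "|".join(f.replace(" ", "+") for f in families)
    return "http://fonts.googleapis.com/css?family=" + joined

print(webfont_url(["Arbutus Slab", "Tinos:400,400italic", "Ovo"]))
# http://fonts.googleapis.com/css?family=Arbutus+Slab|Tinos:400,400italic|Ovo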


Google provides hundreds of free fonts (again, see Google Webfonts for details), and many of them are outstanding. The serif fonts are less numerous and less varied than the sans-serif fonts Google provides, and there are no convincing "typewriter fonts" (which is a serious omission, IMHO), but you'll find no shortage of headline fonts. Check the character sets carefully, in any case, because many of the fonts provide only a basic Latin alphanumeric character set.

For an even greater variety of fonts, be sure to check out Adobe's Typekit site.

Here are some of my personal favorites from the Google collection:


Ovo
Nicole Fally's Ovo was inspired by a set of hand-lettered caps seen in a 1930s lettering guide. A medium-contrast serif font, Ovo has a noticeable yet agreeable linearity, with crisp features that provide good (though not excellent) legibility at a variety of sizes. This sample is 12pt and shows that the font itself is natively smaller than most fonts. Ovo's serifs and crossbars are slanted and adhere to a single common angle. This makes for a distinctive font but can become intrusive in long passages of text. Ovo is thus (arguably) better used for short and medium-length spans of text.


Belgrano
Belgrano is a slab serif type, initially designed for printed newspapers but now adapted for use on the web. It features coarse terminals and larger counterforms that allow it to work well in smaller sizes. (This sample is 10pt.) Letters of the alphabet that are closed but rounded ('o', 'b', 'p', etc.) tend to circumscribe slightly more white space in Belgrano than in fonts like Alegreya, giving a more open feel to long runs of text.


Tinos
Tinos was designed by Steve Matteson as a refreshingly stylish alternative to Times New Roman. It is metrically compatible with Times New Roman (giving about the same number of words per page, for example), even though it looks more condensed. Tinos offers good onscreen readability characteristics and comes with a superbly crafted italic version. In larger sizes, it quickly loses its "condensed" feel.


Arapey
Eduardo Tunni's first sketches of this typeface were made during a vacation in Arapey, a small town in the north of Uruguay, hence its name. While the font is reminiscent of Bodoni, the soft lines and finishes give the text a smooth, distinguished feeling. The font tends to look best at 12pt or larger sizes. This sample is 13pt.


Alegreya
Alegreya was chosen as one of 53 "Fonts of the Decade" at the ATypI Letter2 competition in September 2011. It was also selected in the 2nd Bienal Iberoamericana de Diseño competition held in Madrid in 2010. Originally intended for literature, Alegreya is more angular than Arapey and conveys a subtle variegation that facilitates the reading of long texts. The italic version shows just as much care and attention to detail as the roman version. There is also a Small Caps sister family. The font is natively somewhat small (this is a 12pt sample).


Adamina
An excellent general-purpose serif font for long-form text projects, Adamina was specifically designed for readability at small sizes. As a result, the x-height is increased and complex features (of the kind that contribute to contrast) are kept more controlled. One-sided flaring and asymmetrical serifs provide a pleasant reading experience; the font never feels intrusive. This is an 11pt sample with letter spacing increased by 0.01em and word-spacing set to 0.1em (because otherwise it can look a bit tight, especially at small point sizes).


Ledger
Much of Ledger's charm, as with Garamond, comes from its relatively heavy downstroke thickness compared to the almost frail stroke thickness at the tops of curved letters like 'o' and 'p'. That and the font's slightly more open character make Ledger a good alternative to Garamond-family fonts in larger sizes (though not smaller sizes). The letter forms feature a large x-height, good stroke contrast, and elegant wedge-like serifs and terminals, yielding a "distinguished-looking" font, again in the spirit of Garamond except with somewhat better screen readability.


Andada
Designed by Carolina Giovagnoli for Huerta Tipográfica, Andada shares many of Adamina's most agreeable features but, by virtue of being a slab-serif design, lacks the more refined flourishes (in ascenders and descenders, for example) of Adamina. Perhaps precisely because of the less-adorned design, many readers will prefer Andada over Adamina (or "Garamond-like" fonts) for long passages of text. 

Note: If you found this post useful, please tweet it and/or share the link with a friend. Thanks!

Friday, January 25, 2013

Can you name these famous authors?

If you consider yourself a true bibliophile, here's a quick test for you. At right are photos of 18 famous authors (of fiction, although many also wrote nonfiction), from the 19th and 20th centuries. Eleven wrote solely or primarily in English; seven wrote in a language other than English. As far as I know, only one of these people is still alive.

For this particular test, I'm including only male authors. In a future post, I'll do female authors only. That'll be much more challenging.

Scoring works like this. There are 18 authors. Give yourself five points for every correct answer (that's a possible total of 90 points), then give yourself a free 10-point bonus if you end up not using any of the hints shown below. If you do use the hints (any of them), you don't get the 10-point bonus.

Check the bottom of the page to see how you did. Good luck!

Hints

1. Tried to study engineering; became a wordsmith instead. Dead at 44 after writing a dozen novels plus scores of stories, poems, essays.

2. Left school to work in a factory after his father was thrown into debtors' prison.

3. The godfather of futurist steampunk.

4. Workaholic pioneer of literary realism.

5. Less than three years after winning the 1957 Nobel Prize for Literature, he died in a car wreck at age 46.

6. Novelist, short story writer, social critic, philanthropist, essayist, and 1929 Nobel Prize laureate. Known for epic irony and ironic epics.

7. He won the 1962 Nobel Prize for literature for his "realistic and imaginative writing, combining as it does sympathetic humor and keen social perception."

8. In 1851, after running up heavy gambling debts, he went with his older brother to the Caucasus and joined the army. Then he began writing.

9. In addition to his famous dystopian novel, he wrote literary criticism, poetry, and polemical journalism. Heavy smoking did not help his tuberculosis.

10. Known for his prescience.

11. This Prague-born author's social satire was as grotesque as it was moving.

12. He went from unknown to famous to unknown in the space of his 72-year-long life.

13. Wait. You don't recognize Достое́вский? He and other members of his literary group were arrested, sentenced to death, subjected to a mock execution, then given four years of hard labor in Siberia.

14. Poet, painter, and master of the Bildungsroman. He received the Nobel Prize in 1946.

15. Winner of the 1954 Nobel Prize for Literature. Dead in 1961 at age 61.

16. Better known in Kashmiri as अहमद सलमान रुशदी. He started out as an ad copywriter with Ogilvy & Mather.

17. His major opus was reportedly typed as a single paragraph on a 120-foot-long scroll of paper.

18. While genuinely a gifted writer, he became famous mainly for being famous. Many think of him as having pioneered the "nonfiction novel."


Answers

1. Robert Louis Stevenson. 2. Charles Dickens. 3. H.G. Wells. 4. Honoré de Balzac. 5. Albert Camus. 6. Thomas Mann. 7. John Steinbeck. 8. Leo Tolstoy. 9. George Orwell. 10. Arthur C. Clarke. 11. Franz Kafka. 12. Herman Melville. 13. Fyodor Dostoyevsky. 14. Hermann Hesse. 15. Ernest Hemingway. 16. Salman Rushdie. 17. Jack Kerouac. 18. Truman Capote.


Scoring
(5 points per correct answer plus 10 points if you didn't use Hints)

90 to 100: Master bibliophile. Congratulations.
80 to 89: Excellent. You've been paying attention.
70 to 79: Solid. You're no literary dummy.
60 to 69: Acceptable. It's possible you actually earned your degree.
50 to 59: Poor. You've been reading the wrong stuff.
40 to 49: Were you not paying any attention in school?
below 40: Give your degree back. You were wasting everyone's time.

Thursday, January 24, 2013

When is Surface-Deep Knowledge Good Enough?

As the dimensionality of a (hyper)sphere increases, more and more of the volume is near the surface. The pink and red portions of the (hyper)spheres shown in cross-section here each contain 50% of the volume. 'N' is the dimensionality.
The common supposition is that when your knowledge of something is "surface deep," it's tantamount to knowing nothing. But is that always true? What if you understand many facets of a complex topic, some perhaps at a deep level, but you lack formal training in those facets? Does it mean your understanding rounds off to zero? Hardly.

Here's one way to look at it. Suppose the subject domain (whatever it happens to be) can be represented, conceptually, as a sphere. Everything there is to know about the subject maps to some region inside the sphere. "Total knowledge" represents the total contents (the total volume) of the sphere.

If the sphere is three-dimensional, half the volume is contained in an inner sphere that has 79.37% of the radius of the overall sphere. (Stay with me on this for a moment, even if you're not a math person.) For purposes of discussion, we'll consider a sphere of radius 1.0 (a so-called "unit sphere"). By comparison to a unit sphere, a sphere that has a radius of 0.7937 contains half the volume of the unit sphere. The reason for this is that volume grows as the cube of the radius, and the cube root of 0.5 is 0.7937. So that's what I mean when I say that the innermost 79.37% of a sphere (any 3-dimensional sphere), as measured in terms of its radius, contains 50% of the volume of the sphere. The outermost 20.63% of the radius bounds the outermost 50% of the sphere's volume.

This is summarized in the topmost portion of the accompanying graphic, where we see the cross-section of a sphere with the innermost half of the volume shaded in pink and the outermost half shaded in dark red. The boundary between the two half-volumes starts at a point on the radius that's 79.37% of the way from the center to the surface.

Now suppose we consider a hypersphere of dimensionality 10. That's the middle sphere of the graphic (the one that has "N = 10" next to it). The volume of such a sphere grows as the tenth power of the radius. Therefore the inner and outer half-volumes are delimited at a point on the radius that is 93.3% of the way from the sphere's center (the tenth root of 0.5 is 0.93303). Again, the graphic depicts the outer half-volume in dark red. Notice how much thinner it is than in the top drawing.

If we step up the dimensionality to N = 30, the half-volumes are delimited at the 97.72%-radius point. Half the volume of the hypersphere is contained in just the outer 2.28% of the radius.
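
Don't take my word for the arithmetic. The radius fraction bounding the inner half-volume of an N-dimensional sphere is just the Nth root of 0.5, so a few lines of Python will verify all of the numbers above:

for n in (3, 10, 30, 100):
    # r is the radius fraction at which r**n = 0.5, i.e. the inner sphere
    # of radius r holds half the volume of a unit n-dimensional sphere.
    r = 0.5 ** (1.0 / n)
    print("N = %3d: half-volume boundary at %.4f of radius; outer shell %.4f thick" % (n, r, 1.0 - r))

By N = 100, the outer half of the volume occupies a shell less than 0.7% of the radius thick.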

You can see where I'm going with this. As the dimensionality N approaches infinity, virtually all of a hypersphere's volume ends up in an arbitrarily thin shell at the surface.

So if the dimensionality of a problem is large enough (and you're willing to buy into the simple "volume is knowledge" model set forth earlier), surface-deep knowledge can be quite valuable indeed.

The next time someone tells you your knowledge of something is only "surface-deep," consider the number of dimensions to the problem, then tell the person: "Dude. This is an N-dimensional problem, and since N is high in this case, surface-deep knowledge happens to be plenty. Let me explain why . . ."

Wednesday, January 23, 2013

Creating Your Own Memorization Tricks

Better memory is something almost everybody wants. How much time have you spent recovering passwords to websites? Trying to remember what you were supposed to buy at the supermarket? Trying to remember phone numbers? Trying to remember where you put the damn keys? Trying to remember where you stashed the access code to your wireless connection?

If you do a survey of memorization tricks, you quickly find that they all rely on the same few sorts of strategies. One common strategy is to connect a picture with whatever you're trying to remember. Another is to connect an emotion. When you're trying to memorize more than one thing at the same time (such as a name plus a face, or several numbers in a sequence), combine multiple "lookup techniques" to make a story.

All of these strategies involve making connections between disparate object-types (e.g., associating an image with a number), in hopes of enlisting more than one part of your brain in memorizing whatever it is you're trying to memorize. Once you know that, it's fairly easy to make up your own memorization tricks.

The key is to take advantage of the fact that your brain stores information in different ways. One part of your brain is devoted to face recognition (and facial memory). Another part is devoted to emotional memory. We also have distinct ways of remembering shapes and imagery; sounds; vocabulary and language-based meanings; letters, numbers, or glyphs; mathematical relationships; kinesthetic experiences ("muscle memory"); and a bunch of other stuff I can't remember right now. (Bwa-ha-ha.)

The key is to tie two or more of these memory modalities together.

How many times have you been in a phone conversation where someone suddenly gives you a phone number when you're not ready to copy it down? I made up my own memory technique for that. I take the first portion of the phone number and memorize the visual image of it (the picture of it, as if it's a photo of the number projected on a wall). Then I recite the last portion of the phone number (either silently or out loud) repeatedly, like a mantra, until it's part of my mouth's muscle memory. I don't just "recite" the number in a monotone voice, I actually make it a sing-songy, semi-musical ditty, the way you often hear phone numbers sung in radio commercials.

I find that it's easy to hold a "photographic image" of a number in one part of my brain and a sing-songy spoken (or sung) number in another part of my brain, at the same time. Many math savants (people who can tell if a large number is prime, or who can multiply any two numbers in their head, etc.) report that they rely on techniques involving seeing the shapes of numbers. Thinking of shapes is often useful when trying to memorize the "photo image" of a number. E.g., 413 is sharp and pointy on the left (it has the shape of the prow of a ship) but round like two buttocks on the right.

Sometimes I use a different technique for phone numbers. (This is going to sound ridiculous.) Suppose the number I want to memorize is 326-5918. This is a fairly difficult number to remember because no two digits are the same. First, I quickly memorize the 326 part by rote. (If I suspect I'll forget the '326' part, I'll go a step further and try to find a mathematical crutch that will help me. In this case: 3 times 2 is 6.) For the 5918 part, I tell myself "I feel like I'm 59 years old, but I want to feel like I'm 18." Or I make up a fantastical little story: "When I'm 59 years old I'll meet someone who's 18." (Yeah, right.) If I'm on the phone with a customer service representative: "Holy crap, she must think I'm 59 years old, but she sounds like she's 18!"
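
Just for fun, here's that trick expressed as a Python sketch (a toy of my own devising; it assumes a seven-digit number, and all the names are mine):

def chunk_phone_number(digits):
    # Split off a 3-digit "photo" chunk, read the last four digits as two
    # two-digit "ages," and hunt for an arithmetic crutch inside the photo
    # chunk (e.g., 3 times 2 is 6 for '326').
    photo, tail = digits[:3], digits[3:]
    ages = (tail[:2], tail[2:])
    a, b, c = (int(d) for d in photo)
    crutch = None
    if a + b == c:
        crutch = "%d + %d = %d" % (a, b, c)
    elif a * b == c:
        crutch = "%d x %d = %d" % (a, b, c)
    return photo, ages, crutch

print(chunk_phone_number("3265918"))
# prints: ('326', ('59', '18'), '3 x 2 = 6')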

I'm still bad with faces and names. The experts say to transform a facial feature into an object, then create a bizarre story about the object that's easy to remember. So for example, suppose you meet someone named Cory Zimmerman. You might make the (absurd) realization that the person's neck reminds you of an apple core (it looks Core-y). Then you might imagine that his zipper is down (Zipperman). A person with an apple core for a neck, with his zipper down, is laughable enough to remember. Arguably.

At any rate, now you know how to invent your own memory techniques. Take any two modalities of learning (muscle memory, image memory, math-relationships memory, etc.) and connect them together, then overlay with a story. The more absurd the story, the better. Remember that.

Tuesday, January 22, 2013

This is How Wrong Kurzweil Is

Yesterday I criticized Ray Kurzweil's prediction (made in a Discover article) of the arrival of sentient, fully conscious machine intelligences by 2029. I'd like to put more flesh on some of the ideas I talked about earlier.

Because of some of the criteria Kurzweil has set for sentient machines (e.g. that they have emotional systems indistinguishable from those of humans), I like to go ahead and assume that the kind of machine Kurzweil is talking about would have fears, inhibitions, hopes, dreams, beliefs, a sense of aesthetics, understanding (and opinions about) spiritual concepts, a subconscious "mind," and so on. Not just the ability to win at chess.

Microtubules appear to play a key role in long-term memory.
I call such a machine Homo-complete, meaning that the machine has not only computational capabilities but all the things that make the human mind human. I argued yesterday that this requires a developmental growth process starting in "infancy." A Homo-complete machine would not be recognizably Homo sapiens-like if it lacked a childhood, in other words. It would also need to have an understanding of concepts like gender identity and social responsibility that are, at root, socially constructed and depend on a complex history of interactions with friends, parents, relatives, teachers, role models (from real life, from TV, from the movies), etc.

A successful Homo-complete machine would have the same cognitive characteristics and unrealized potentials that humans have. It would have to have the ability not just to ideate, calculate, and create, but to worry, feel anxiety, have self-esteem issues, "forget things," be moody, misinterpret things in a characteristically human way, feel guilt, understand what jealousy and hatred are, and so on.

On top of all that, a Homo-complete machine would need to have a subconscious mind and the ability to develop mental illnesses and acquire sociopathic thought processes. Even if the machine is deliberately created as a preeminently "normal," fully self-actualized intelligence (in the Maslow-complete sense), it would still have to have the potential of becoming depressed, having intrusive thoughts, developing compulsivities, experiencing panic attacks, acquiring addictions (to electronic poker, perhaps!), and so on. Most of the afflictions described in the Diagnostic and Statistical Manual of Mental Disorders are emergent in nature. In other words, you're not born with them. Neither would a Kurzweil machine be born with them; yet it could acquire them.

We're a long way from realizing any of this in silicon.

Kurzweil conveniently makes no mention of how the human brain would be modeled in a Homo-complete machine. One presumes that he views neurons as mini-electronic devices (like elements of an electrical circuit) with firing characteristics that, once adequately modeled mathematically, would account for all of the activities of a human brain under some kind of computer-science neural-network scheme. That's a peculiarly quaint outlook. Such a scheme would model the brain about as well as a blow-up doll models the human body.
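
For concreteness, here's roughly what that quaint picture amounts to in code: a leaky integrate-and-fire "point neuron," the standard textbook abstraction (Kurzweil doesn't commit to any particular model, so this Python sketch is purely illustrative, with made-up parameter values). The entire cell collapses to one voltage variable and a threshold:

def simulate_lif(current=2e-9, duration=0.1, dt=1e-4):
    # Leaky integrate-and-fire neuron. All parameter values are illustrative.
    v_rest, v_thresh, v_reset = -0.070, -0.054, -0.070   # volts
    tau, resistance = 0.020, 1e7                         # seconds, ohms
    v = v_rest
    spike_times = []
    for step in range(int(duration / dt)):
        # Voltage decays toward rest and is pushed up by the input current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:                  # threshold crossed:
            spike_times.append(step * dt)  # record a spike,
            v = v_reset                    # then reset
    return spike_times

print(len(simulate_lif()), "spikes in 100 ms of simulated time")

Notice everything the sketch leaves out: no neurotransmitter chemistry, no dendrites, no glia, no microtubules. Which is exactly the problem.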

Current mathematical models are impressive (see [3] below, for example), but they don't tell the whole story. It's also necessary to consider the following:

  • Neurotransmitter vesicle release is probabilistic and possibly non-computable.

  • Beck and Eccles [2] have suggested that quantum indeterminacy may be involved in consciousness.

  • It's likely that consciousness occurs primarily in dendritic-dendritic processing (about which little is known, except that it's vastly more complex than synapse-synapse processing) and that classical axonal neuron firing primarily supports more-or-less automatic, non-conscious activities [1][7].

  • Substantial recent work has shown the involvement of protein kinases in mediating memory. (See, for example, [9] below.) To model this realistically, it would be necessary to have an in-depth understanding of the underlying enzyme kinetics.

  • To model the brain accurately would require modeling the production, uptake, reuptake, and metabolic breakdown of serotonin, dopamine, norepinephrine, glutamate, and other synaptic substances in a fully dynamic way, accounting for all possible interactions of these substances, in all relevant biochemical contexts. It would also require modeling sodium, potassium, and calcium ion channel dynamics to a high degree of accuracy. Add to that the effect of hormones on various parts of the brain. Also add intracellular phosphate metabolism. (Phosphates are key to the action of protein kinases, which, as mentioned before, are involved in memory.)

  • Recent work has established that microtubules are responsible not only for maintaining and regulating neuronal conformation, but in addition, they service ion channels and synaptic receptors, provide for neurotransmitter vesicle transport and release, and are involved in "second messenger" post-synaptic signaling. Moreover, they're believed to affect post-synaptic receptor activation. According to Hameroff and Penrose [5], it's possible (even likely) that microtubules directly facilitate computation, both classically and by quantum coherent superposition. See this remarkable blog post for details.

Kurzweil is undoubtedly correct to imply that we'll know a great deal more about brain function in 2029 than we do now, and in all likelihood we will indeed begin to see, by then, machines that convincingly replicate certain individual aspects or modalities of human brain activity. But to say that we will see, by 2029, the development of computers with true consciousness, plus emotions and all the other things that make the human brain human, is nonsense. We'll be lucky to see such a thing in less than several hundred years—if ever.


References

1. Alkon, D.L. 1989. Memory storage and neural systems. Scientific American 261(1):42-50.

2. Beck, F. and Eccles, J.C. 1992. Quantum aspects of brain activity and the role of consciousness. Proc. Natl. Acad. Sci. USA 89(23):11357-11361.

3. Buchholtz, F., et al. 1992. Mathematical model of an identified stomatogastric ganglion neuron. J. Neurophysiology 67(2), February 1992.

4. Hameroff, S. 1996. Cytoplasmic gel states and ordered water: possible roles in biological quantum coherence. Proceedings of the Second Advanced Water Symposium, Dallas, Texas, October 4-6, 1996. http://www.u.arizona.edu/~hameroff/water2.html

5. Hameroff, S.R. and Penrose, R. 1996. Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness. In: Toward a Science of Consciousness: The First Tucson Discussions and Debates, S.R. Hameroff, A. Kaszniak, and A.C. Scott (eds.), MIT Press, Cambridge, MA. Also published in Mathematics and Computers in Simulation 40:453-480.

6. Hameroff, S., Kaszniak, A., and Scott, A. (eds.) 1998. Toward a Science of Consciousness II: The 1996 Tucson Discussions and Debates. MIT Press, Cambridge, MA.

7. Pribram, K.H. 1991. Brain and Perception. Lawrence Erlbaum, New Jersey.

8. Rovelli, C. and Smolin, L. 1995. Discreteness of area and volume in quantum gravity. Nuclear Physics B 442:593-619.

9. Shema, R., et al. 2007. Rapid erasure of long-term memory associations in the cortex by an inhibitor of PKMζ. Science 317(5840):951-953.

Monday, January 21, 2013

What Kurzweil Is Forgetting

In the Twilight Zone episode "The Lonely," Jean Marsh plays the robotic companion of Jack Warden (who is stranded on an asteroid).
Ray Kurzweil has written a stimulating essay for Discover titled "How Infinite in Faculty," in which (surprise, surprise) he predicts that by 2029, it will be possible to create a machine that shows every evidence of being "conscious." The full text of the piece is online here.

According to Kurzweil, such machines will "exhibit the full range of familiar emotional cues; they will make us laugh and cry" [note: my Windows Vista laptop already makes me cry] "and they will get mad at us if we say we don't believe they are conscious."

I don't know if Kurzweil wrote the teaser line just under the headline of the article ("Future machines will exhibit a full range of human traits and help answer one of science's most important questions: What is consciousness?"), but it seems likely he had veto power over the article's final presentation, so I presume Kurzweil stands by the "full range of human traits" bit.

I think Kurzweil has conveniently overlooked a lot of what we know about humans and their "full range" of traits.

Human beings don't suddenly wake up as adults, with complex emotions, personality, and learned behaviors. The kind of machine Kurzweil is talking about doesn't start out as an infant and go through the complex parental and social interactions (and stages of neurological development) that a very young human goes through. Therefore the machine would not magically exhibit human adult traits straight out of the box. A human's emotional self is the outcome of years of development (involving responses to things like sibling rivalry, learned gender-based behaviors, the side-effects of parental divorce, bullying in school, physical or psychological abuse by relatives, miscellaneous traumas and triumphs big and small, the ebb and flow of self-esteem issues throughout early life into young adulthood, all the messy hormone-influenced issues attendant to puberty, and so on).

Any sociologist will tell you that much of "who we are" is determined by the socially constructed norms of the society we live in. Thoughts and behaviors based on social norms aren't "data" that you can feed into a machine. It takes years of development, starting in infancy, to learn socially constructed concepts and integrate them into one's nervous system in a way that accords with (and simultaneously produces) an individual's personality traits around sex and gender, guilt, responsibility, ethics and morals, one's sense of "justice," self esteem, political philosophy, etc. (That's an absurdly short list of socially constructed phenomena, by the way.) These things don't spring up fully formed in an individual in a vacuum, the way they would have to in a "fully conscious" and emotionally complex Kurzweil machine.

For a machine to be Homo-complete (as in Homo sapiens), meaning that its responses to verbal and other stimuli are indistinguishable from those of a human being, the machine would have to be capable of starting "life" as a child and experiencing the developmental processes that allow a child to become an adult. (I don't subscribe to the idea that the complete neural state of an adult can simply be captured digitally and loaded into a machine to yield a Homo-complete pseudo-being. Nothing even close to that is going to be possible by 2029.) The machine would need to be capable (in theory, at least) of "growing up" to be male or female, if for no other reason than that the male and female human brains are anatomically and functionally different. Which gender will Kurzweil choose to replicate?

The machine would also have to be capable of developing psychological disorders, including PTSD if subjected to trauma, depression and other mood disorders (assuming the machine is capable of having moods, which Kurzweil certainly seems to be assuming), phobias and related severe anxiety, addiction behaviors, delusions, paranoia, mania, borderline personality disorder, OCD, cognitive issues, dissociative disorders, factitious disorders, and a range of other disorders (basically everything in the Diagnostic and Statistical Manual), possibly including schizophrenia.

Any Homo-complete machine would also have to admit the possibility of sociopathic tendencies. (Many science fiction movies already portray cybernetic humans as homicidal, so perhaps this is a given.)

Kurzweil might argue "We wouldn't deliberately build in any pathologies of any sort." Yes, but many psychological disorders (and quite a few criminal behaviors) are emergent in nature. People aren't born with them. Neither will Kurzweil's machines be born with them.

Of course, a Kurzweil machine will have to be capable of sight, hearing, and touch, lest it be "born" into a nightmarish Helen Keller world. Yet merely "waking up with vision" is not a straightforward thing, neurologically. It takes infants many months to learn how to "see properly." People who suddenly recover from blindness as adults usually experience severe visual agnosia, and ultimately depression. [See this article.]

Bottom line, it's not clear to me how a Kurzweil machine can be truly Homo-complete in any meaningful sense. All it will be is a pseudo-conscious logic machine with, at best, inappropriate emotions based on retarded social development, and at worst, disturbing sociopathic tendencies. The prospects of such a machine being a worthy companion along the lines of the robot in the Twilight Zone episode The Lonely are slim to none.

In tomorrow's post, I'll get more specific about why Kurzweil's 2029 prediction is wrong.

Sunday, January 20, 2013

Conspiracy Thinking and Insanity

Yesterday I blogged about the nature of paranoid-schizophrenic thought processes. I mentioned that many times, paranoid delusions are persecutorial in theme.

Somewhere between 6% and 20% of the American public (depending on whose stats you believe) think the moon landings were an elaborate hoax.
When a persecutorial thought is bizarre-sounding by accepted social norms (admittedly a fuzzy criterion, but that's what it comes down to) and has no provable basis in fact, yet a person clings to such thoughts as though they are perfectly legitimate (perfectly factual), the thoughts in question can be said to constitute delusional thinking.

All of us engage in paranoid fantasies from time to time, and some of those thoughts (if they're persistent enough, and deeply believed) can probably be called delusional.

But what happens when a sizable group of people latches onto the same delusional thoughts?

Answer: Sometimes it becomes a political faction.

Alexander Zaitchik, writing in an article called "Patriot Paranoia: A Look at the Top Ten Conspiracy Theories," points out:

Over the last two decades, a far-right conspiracy culture of self-proclaimed "Patriots" has emerged in which the United States government itself is viewed as a mortal threat to everything from constitutional democracy to the survival of the human race. This conspiracy revival — which has been accompanied by the explosive growth of Patriot groups over the last year and a half — kicked into overdrive with the 2008 election of President Barack Obama, who is seen by Patriots as a foreign-born Manchurian candidate sent by forces of the so-called "New World Order" to destroy American sovereignty and institute one-world socialist government.

Zaitchik points to such popular right-wing conspiracy theories as the idea that military research at Fort Detrick was behind the avian flu virus (or the AIDS virus) and that the U.S. government itself deliberately brought down NYC's Twin Towers (and/or nearby buildings) through controlled demolition, in order to begin instituting a police state (with repeal of personal freedoms, increased ease of wiretapping, and mysterious goings-on by a Homeland Security agency tasked with spying on, and detaining, ordinary citizens). To be fair, the latter bit of delusional nonsense isn't just a far-right theory. There are those on the far left who also spout the "9-11 Attack as Government Plot" line of bull.

Being open to bizarre ideas (such as the notion that the moon landings might have been an elaborate hoax) doesn't make you crazy ipso facto. But clinging to such ideas to the point where you're unwilling to consider evidence-based alternatives clearly puts you on a different part of the sanity/insanity continuum than the rest of us. Particularly if you're living your life around the paranoid idea(s) in question.

Personally, I consider the moon-landing-hoaxers and the 9-11 conspiracy theorists to be a bit crazy. Not full-on psychotic, of course (although some no doubt are). But such people are definitely engaging in psychotomimetic thinking. And they live outside social norms. That alone makes them crazier than the rest of us.

The bottom line? Sanity and insanity are not absolutes. They're imaginary goal posts on a playing field that runs the gamut between normal and abnormal. It's a good idea to know what part of the playing field you're spending most of your time on. If you lack enough self-awareness even to do that, look out. Consider it a danger sign.

Saturday, January 19, 2013

When Is a Crazy Thought a Crazy Thought?

The other day I was reading a chapter-in-progress from Sally's schizophrenia memoir. It's a chapter describing "a day in the life" from her most florid psychotic period twelve years ago. In it, she describes seeing "coded messages" in the arrangement of everyday objects (like toothbrushes and bars of soap), "messages" that were being crafted just for her by nameless spies intent on messing with her head.

(By the way, if you want to see the book chapter I'm talking about, in draft form, Sally and I will be sending it out in a few days to everybody who has signed up for book updates. See the form at the bottom of this page if you want to be on our mailing list.)

Sally's cognitive parsing process, when she was in the midst of psychosis, was undeniably bizarre, but there was always logic behind it. Her delusional thoughts weren't just random, fleeting figments. They were reasoned thoughts. The schizophrenic mind puts a huge amount of effort into trying to decipher sensory reality according to rules (rules that make sense to the schizophrenic mind).

Many of Sally's delusions were persecutorial in nature: They're out to get me. The idea that a nameless, invisible "them" (or "they") might be "out to get you" sounds laughable to those of us who consider ourselves "sane." And yet, this type of thinking is actually quite common. When you listen to far-right-wing rhetoric on talk radio (or coming from commentators on the Fox News Channel), what do you hear? Quite often, you hear paranoid rants about how "the government" is trying to take away your basic freedoms (or your money, your gun rights, your right to worship, or what have you). If it's not the government that's out to get you, then it's those pesky secular humanists, or perhaps the Trilateral Commission, or maybe the Freemasons acting in concert with the Illuminati, or maybe nameless, faceless forces sympathetic to the New World Order.

Paranoid thoughts of this general type are extremely common. A 2006 study by researchers at the Institute of Psychiatry, King's College London found, in surveying 1,200 "normal people," that 
  • over 40% of people regularly worry that negative comments are being made about them
  • 27% think that people deliberately try to irritate them
  • 20% worry about being observed or followed
  • 10% think that someone "has it in" for them
  • 5% worry that there's a conspiracy to harm them
These sorts of thoughts are qualitatively no different from thoughts that someone with paranoid schizophrenia would have. They're "crazy thoughts."

When do paranoid thoughts become pathological delusions? For someone with schizophrenia, paranoid thoughts tend to be greater in number; more frequent in occurrence; more elaborate; and more believable, than for the rest of us. A person with schizophrenia usually believes fervently in the factual basis of his or her most bizarre thoughts, to the point of becoming obsessed with them; and the overall effect is to leave the person confused and full of fear, to the point where the person might be terrified to step into the next room, let alone leave the house.

One of the things that has fascinated me, in talking to Sally (and reading her book manuscript), is the extent to which "normal people" think crazy thoughts.

Perhaps it shouldn't be so surprising. After all, societal norms establish the limits of "normal" thinking and behavior (by definition). Sanity is thus, to a degree, socially constructed. Words like "sane" and "insane" are artificial constructs that have no objective meaning. We all live somewhere on a sane/insane continuum.

Consider the fact that schizophrenia sufferers often associate specific meanings with individual numbers (perhaps associating "anger" with the number four, say). Many clinicians consider this sort of illogical association to be a hallmark of psychotic ideation. And yet, most "normal" people believe thirteen to be an unlucky number, which is fundamentally no different from a schizophrenic person believing that four means anger. (There are hundreds, perhaps thousands, of very tall buildings in the United States that have no thirteenth floor, precisely because so many people are convinced of thirteen's potential for evil.) Believing thirteen is "unlucky" doesn't make you mentally ill. Western society accepts the idea that thirteen is unlucky, even though it's a profoundly bizarre concept, qualitatively no different from a "crazy person's" thoughts. But if you believe that all whole numbers from one to fifty have specific individual meanings, that doesn't fit with society's norms. And if ideas about numbers are coming into your head all the time, out of control, causing you to feel so much anxiety that you can't go about your daily business, that's a problem; that's mental illness.

I thought I knew a lot about these sorts of things before reading Sally's book, but as I read each freshly written chapter, I find I'm still learning new things, making fresh associations, filling in the gaps of (mis)understanding; having new ideas. Some of them a little crazy.

It's exciting to see Sally's book coming together. When she finishes it, it'll be quite a read.

Friday, January 18, 2013

The Serif Readability Myth

I've been involved in publishing all my life, and like many others I've always accepted as axiomatic the notion that typefaces with serifs (such as Times-Roman) are, in general, more readable than non-serif typefaces (e.g., Helvetica). It never occurred to me that there was any doubt about the matter. Were the monks who invented serifs and other text ornamentations merely engaging in idle doodling? Weren't they consciously intending to increase the legibility of the important documents they were transcribing?

It turns out that, as with so many of the things we "know" are right, the idea that serif typefaces are more readable than non-serif typefaces simply isn't supported by the evidence.

At first, I scoffed at the idea that what everybody in the design world knows to be "obviously true" simply isn't. But then I happened upon the remarkable 1999 Ph.D. dissertation of Ole Lund (then of Høgskolen i Gjøvik), titled "Knowledge construction in typography: the case of legibility research and the legibility of sans serif typefaces" (download here).

It's impossible to do justice to Lund's stunningly thorough (and beautifully written) 287-page dissertation in a short space. You have to read it for yourself.

Lund undertakes an exceptionally detailed and critical review of 28 typeface legibility studies conducted between 1896 and 1997. He finds serious methodological problems in nearly all of them. Legibility itself is still poorly defined, even today, and is not well distinguished from readability. It turns out a surprising number of otherwise convincing "legibility studies" have been based on reading speed or reading comprehension, which have no bearing on glyph recognition per se. Reading speed is now known to be mainly a function of cognition speed, which varies considerably from individual to individual and is not related in any straightforward way (and possibly in no way) to typeface design. Reading comprehension is even further removed from type design.

Even if legibility is defined in terms of symbol recognition, one must decide how, exactly, such a thing is to be measured. Two common methodologies are variation of time of exposure (an attempt to measure speed of perception) and variation of distance ("perceptibility at a distance"). There are also methods based on type size. All have complicating factors. Harris [3] points to evidence that time-of-exposure methods, as well as the variable-distance method, very likely favor typefaces with relatively large stroke width, regardless of serifs. Type-size methods are complicated by the fact that, within a given typeface, letterforms at large point sizes are not simply scaled-up versions of the letterforms at small sizes.

Designer George E. Mack, commenting on the concept of legibility in Communication Arts [5], said:

The basic concept is so tangled up in decipherability, pattern recognition, reading speed, retention, familiarity, visual grouping, aesthetic response, and real life vs. test conditions that contradictory results can be obtained for the same type faces under different test conditions.

Part of our "accepted wisdom" on the legibility of serif typefaces comes from research in cognitive psychology (most famously the work of Bouma [1]) built around the notion that words are recognized not on a strict letter-by-letter basis but by their overall outlines or contours (word shape). This research has long since been shot down, as Kevin Larson [4] points out: "Word shape is no longer a viable model of word recognition. The bulk of scientific evidence says that we recognize a word's component letters, then use that visual information to recognize a word."

One of the most-cited "authorities" on serif legibility is Cyril Burt, whose 1955 article [2] in The British Journal of Statistical Psychology (a journal he himself edited) seemed to end the debate over whether serif typefaces are more readable than sans-serif typefaces. But Burt's statements about the supposed superiority of serif fonts turned out to be nothing more than idle conjecture dressed up to sound scientific. After his death in 1971, Burt's landmark work on the heritability of I.Q. was discredited (and his reputation destroyed) over his use of nonexistent data and nonexistent coauthors. Rooum [7] and others found Burt's typeface research to be bogus as well (his coauthors on the 1955 typography paper appear to have been fictitious). Anyone who cites Burt today is citing discredited nonsense.

So before you go around claiming that serif typefaces are easier to read than sans-serif typefaces, you might want to do a little checking first. The embarrassing truth is that there's no solid research to back up the claim. It's one of the many things you (and I) have accepted as true that simply aren't.


References

1. Bouma, H. 1973. "Visual Interference in the Parafoveal Recognition of Initial and Final Letters of Words." Vision Research, vol. 13, pp. 762-782.

2. Burt, Cyril, W.F. Cooper, and J.L. Martin. 1955. "A psychological study of typography." The British Journal of Statistical Psychology, vol. 8, pt. 1, pp. 29-57.

3. Harris, J. 1973. "Confusions in letter recognition." Professional Printer, vol. 17, no. 2, pp. 29-34.

4. Larson, Kevin. 2004. "The Science of Word Recognition."

5. Mack, George E. 1979. "Opinion/Commentary." Communication Arts, vol. 21, pt. 2, May/June, pp. 96-97.

6. Poole, Alex. 2012. "Fighting bad typography research."

7. Rooum, Donald. 1981. "Cyril Burt's 'A psychological study of typography': a reappraisal." Typos: a journal of typography, no. 4, pp. 37-40. London College of Printing.

Thursday, January 17, 2013

Riddles have no place in job interviews

I've seen "tech recruitment" from both sides of the desk. I have been a job applicant, and I have been a hiring manager. Neither role is pretty.

One of the unprettier sides of the hiring process in R&D is the on-site-interview stage, when the hiring manager (or one of his peers) gets to ask the applicant highly technical domain-knowledge questions. This can be done skillfully or poorly. It gets ugly fast when it becomes a hazing ritual based on riddle-solving.

The correct answer to riddle questions.
It's one thing to ask an open-ended technical question that lends itself to straightforward answers (e.g., "What are some things you could do to minimize the time spent in garbage collection?"). It's quite another to subject the interviewee to game-show riddles. "Four people want to cross a bridge. They all begin on the same side. You have twelve minutes to get all of them across to the other side. It is night. There is one flashlight. A maximum of two people can cross at one time," etc.
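(As an aside, the garbage-collection question really does lend itself to concrete answers. Below is a minimal Java sketch of one classic answer -- reusing buffers instead of allocating fresh ones on every request -- offered purely as an illustration; the class and its details are hypothetical, not the one "right" response.)

    import java.util.ArrayDeque;

    // One classic way to reduce time spent in garbage collection:
    // reuse expensive objects instead of allocating a new one per
    // request. (Hypothetical sketch; a real pool would need bounds
    // and a thread-safety strategy suited to the application.)
    public class BufferPool {
        private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
        private final int bufferSize;

        public BufferPool(int bufferSize) {
            this.bufferSize = bufferSize;
        }

        // Hand out a recycled buffer if one is available; allocate otherwise.
        public synchronized byte[] acquire() {
            byte[] buf = pool.pollFirst();
            return (buf != null) ? buf : new byte[bufferSize];
        }

        // Return a buffer to the pool so the next caller can reuse it,
        // sparing the collector one more short-lived allocation.
        public synchronized void release(byte[] buf) {
            if (buf.length == bufferSize) {
                pool.addFirst(buf);
            }
        }
    }

A candidate can talk through something like that in two minutes. Nobody can talk through a flashlight riddle; you either know it or you don't.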

My advice to job-hunters: Don't hire the employer who subjects you to such assholery.

When I say "don't hire the employer," I'm referring to the fact that a job interview is a two-way process. The employer is interviewing the prospective employee, but the prospective employee is also interviewing the employer. Each is hiring the other. Both actors should be asking questions. (Reasonable questions.) Both should be engaged in meaningful conversation. Meaningless riddles are out-of-band.

Reject a riddle by asking if you can have a more concrete, job-related question. Ask if there's perhaps a difficult problem currently receiving attention in the department you'd be working in. Ask if you can take a crack at that problem, or something just like it.

If you're lucky (and if the interviewer is anywhere near as smart as he or she thinks), the interviewer will pick up on the fact that you're a serious, pragmatic individual with domain expertise and intelligence, someone eager to apply hard-won knowledge to real-world problems. You're not a game-show contestant.

A really stubborn, inflexible interviewer will stick to the riddle strategy and defend it by saying something like "I don't really care if you get the question right, I just want to see how you think." Which is completely ludicrous. A candidate who immediately produces the "right answer" to a riddle will always impress this kind of interviewer far more than someone who doesn't. That's the whole point of riddles. If a person really wanted to "see how you think," wouldn't he or she want to get to know you a little bit, perhaps draw you out with a series of simple questions? Wouldn't it mean engaging you in two-way conversation about something meaningful?

Ask yourself: Do you want to work for the kind of manager (or company) that sees its new hires as successful game-show contestants?

The right thing to do if you're an interviewer who wants to see how a candidate "thinks" (or "reasons" or "problem-solves") is to ask open-ended questions that are job-related and that call for domain expertise.

If you're hiring a Java programmer, by all means ask an open-ended question like "What would you do if an application is failing because of OutOfMemoryErrors?" This could lead to discussions (and further questions) around a whole host of issues relating to checked and unchecked exceptions, memory leaks, garbage collection, design patterns, good coding practices, debugging strategies, etc. Within a few minutes, you should know a lot more about the applicant's qualifications than whether or not he or she bought this year's "most asked job interview riddles" book before coming to the interview.
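(If you want a concrete prop for that conversation, a sketch like the hypothetical one below -- a "cache" that only ever grows -- gives a candidate plenty to find and discuss. The class name and details are invented for illustration; any real codebase will have its own variants.)

    import java.util.HashMap;
    import java.util.Map;

    // A hypothetical example of the kind of leak that eventually
    // produces an OutOfMemoryError in a long-running application.
    public class SessionCache {
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        public static byte[] lookup(String sessionId) {
            // Every miss inserts a ~1 MB entry, and nothing is ever
            // removed, so this static map pins more and more memory
            // until the heap is exhausted. A good candidate will spot
            // the missing eviction policy and suggest fixes: a bounded
            // LRU cache, weak references, or expiry on logout.
            return CACHE.computeIfAbsent(sessionId, id -> new byte[1024 * 1024]);
        }
    }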

Let's be clear. There's absolutely no need, ever, to subject an interviewee to questions for which there's a "trick answer." The hiring process isn't about tricks and games, is it?

If you're a job candidate and you feel an interview is going in an inappropriate direction, it's up to you to speak out. Don't forget, you're doing some interviewing here, too. Ask politely if you can have another question. If the interviewer sticks with riddles and says "I just want to see how you think," you're dealing with a certain kind of person (colloquially known as a braying jackass), so dumb it down: ask, politely, for a job-related question that will let you demonstrate how you think. You may even have to suggest possible questions yourself, if the interviewer is a bit thick.

Hire an employer who values the real you, not the game-show you. Unless, of course, you're into humiliation and Who's the Alpha Dog bullshit, in which case, may you find happiness and fulfillment together.

Wednesday, January 16, 2013

Muda, Mura, Muri: A Writer's Perspective

Darn those pesky Japanese.

Sometime over the last few years, when I wasn't looking, the Japanese muda-mura-muri meme slipped into the lexicon of people concerned with lean process management and agile software development. Perhaps you're already up to speed on it. If not, here's the crash course.

Muda (think French merde) is waste: any activity or excess resource consumption that does not add value to whatever you're creating. Suppose you're doing a load of laundry. Using an excess of laundry soap would be muda.

Mura is any imbalance or unevenness in your process, whether brought on by you, or imposed on you from the outside. You can think of it as the imbalance that arises from disruption of an otherwise smooth process. Henry Ford famously chose black as the only paint color you could have on a Model T. That's because he knew that different colors of paint have different drying times. Ford standardized on one color of paint (black, the fastest to dry) to eliminate mura on the assembly line caused by uneven drying times of paints.

Muri means unreasonable, impossible, or overburdening. For example: Putting two tons of cargo in a truck designed to carry one ton is muri.

Muda, mura, and muri are often interrelated. If you're running a delivery business and you overload a truck (muri) to the point where it breaks down, that truck's load will have to be redistributed to other trucks, which will disrupt those trucks' normal routes (mura) and doubtless cause them to burn more gas and wear out brake pads faster, due to the added weight and extra time on the road (muda).

It's possible to apply the muda-mura-muri meme to writing. One can think of a static interpretation as well as a process-oriented interpretation.

In the static view, verbosity obviously constitutes muda: Extra words are wasteful. Uneven coverage of a topic (for example, devoting too much attention to an Introduction and not enough to a Conclusion) is an example of mura. Overuse of a particular word, phrase, quotation, or example is muri.

I once had the opportunity to look at the first draft of the first chapter of someone's novel, in manuscript form. It was written in third person, and I was struck by the constant use of "he/him/his" pronouns. In fact, I think I calculated that over 12% of the words in the chapter were he, him, or his. That's asking one pronoun to do too much. That's muri.

From a process point of view: Suppose you have an important writing assignment that will require you to come up with a 10,000-word document in 20 working days. Obviously, to avoid mura (imbalance), you should pace yourself so as to produce around 500 finished words a day, or maybe you should plan to produce 1000 words of raw output per day for ten days, then revise 1000 words per day for another ten days.

If the writing of your piece requires ten rough drafts, that's probably muda: pure waste. No one should need to write ten drafts of anything.

The standard college trick of doing research for 19 days, then pulling an all-nighter, is a gross example of muda, mura, and muri all in one: You can't realistically do 20 days of writing in one all-nighter. That's muri: unreasonable burden. The unevenness of the process (19 days of no writing, then one day of frantic writing) is mura. The sheer wastefulness of spending a disproportionate amount of time on research (allowing only a tiny amount of time for actual writing) is muda.

So, but. That concludes the Japanese meme portion of today's lesson. Now if you'll excuse me, I have to get back to writing the same old muda for a living.

Tuesday, January 15, 2013

How to Write an Opening Sentence



When I was a 25-year-old Senior Editor of The Mother Earth News, I did a lot of rewrite editing. And when I say rewrite editing, I mean burn-the-original-and-start-over editing. This was back in the day when John Shuttleworth (Mother's founder) was alive. Shuttleworth believed in total rewrites of everything.

John was a stickler for what he imagined was inventive, original writing, and I got in trouble countless times for being unimaginative. He pilloried me if any two articles had the same sort of introductory statement. For example, I made the mistake, once, of starting two stories with "If you're like most people [rest of sentence]." Shuttleworth stormed into my office quaking with anger. "Starting any story that way is idiotic!" he barked. "Doing it more than once is criminal. Don't ever do this again."

I got lots of practice, at The Mother Earth News, writing leads to stories (my own stories, and others'). I found it was difficult to come up with truly original introductory paragraphs. Still is, sometimes.

I've noticed that other people have trouble with leads, too, even experienced professionals.

Not long ago (and by the way, "Not long ago" is actually not a bad way to begin any piece of writing), I read three books on fiction writing by Don Maass: Writing the Breakout Novel (2001), The Fire in Fiction (2009), and Writing 21st Century Fiction (2012). The first two are decent (if unremarkable) craft books. The third is well above average. But the books have something to teach (inadvertently) about introductory sentences.

In Writing the Breakout Novel, four of eleven chapters begin with a question. In The Fire in Fiction, six of nine chapters begin with a question (and one other chapter has a question as its second sentence). The prevalence of chapters (and chapter sections as well) that begin with questions is abnormally high in those two books. If you look at the Gotham Writers' Workshop's Writing Fiction, you find eleven chapters written by eleven different authors, and not a single chapter begins with a question.

I suspect that whenever Don Maass gets blocked, he unblocks himself by asking: "What question am I trying to answer here?" Then he starts the chapter (or section) with that question.

Or at least, that's what he used to do.

By 2012, Maass had abruptly (and conspicuously) abandoned the question-crutch. In Writing 21st Century Fiction, not one chapter begins with a question.

Starting a piece of writing with a question is not always a bad thing to do. But beware, it's done a lot. It's overdone. My advice: Don't go there.

Starting a piece of writing with "If you're like most people," or "Most people would agree that," etc., is also overdone. It's trite. Don't go there.

Ditto "There are those who say [whatever]."

Starting with a famous quotation: Sometimes works. More often than not comes off sounding trite.

How should you begin a piece? Let's start by taking a look at some of the chapters in the Gotham Writers' Workshop book, Writing Fiction.

Alexander Steele's chapter, "Fiction: The What, How and Why of It," starts with: "Hello, you look familiar."

Brandi Reissenweber's "Character: Casting Shadows" begins with: "When I taught creative writing on a pediatrics ward at a hospital I met a long-term patient, a thirteen-year-old girl who . . ."

Valerie Vogrin's chapter on point of view: "When I consider a photograph of myself taken from several feet away I see a caricature . . ."

Chris Lombardi on Description: "About twelve years ago, my best friend was reading a draft of a story I'd written about a woman recently returned from years in a far-off country . . ."

Allison Amend begins her chapter on Dialog with: "I've been on a lot of bad dates. A lot."

Caron Gussoff on Setting begins with: "I've lived in seven different states in fifteen years."

Corene LeMaitre on The Business of Writing starts with: "My first memory of meeting with an editor is of being soaking wet."

Okay, I think you see the point. Professional writers, whenever they can get away with it, like to begin a piece of nonfiction with a personal aside or a personal story. But what if you're writing a piece that doesn't permit the use of first person? Consider giving a third-person account. "In September of 1992, two men wearing ski masks and full-body camouflage outfits walked into a pawn shop in Miami . . ." Malcolm Gladwell starts nearly every piece he writes this way.

You can start with a blanket statement. Chapter Nine of Sol Stein's excellent How to Grow a Novel begins with: "A writer cannot write what he does not read with pleasure." Chapter Fourteen begins with: "All fiction writers are emigrants from nonfiction."

Sometimes you can just be stark-blunt about what you intend to do. Chapter Eight of Stein's book, on "Getting Intimate with the Reader," starts out: "This is a chapter about opportunities."

If you're writing a blog post about unequal pay for women and men, you can start with: "This post is about unfairness." Just tell the reader what the subject is.

If you're writing about a difficult subject (for example, rape), you can begin: "Rape is not easy to write about."

Make an exaggerated statement, then tone it down. "In Prohibition days, alcohol could be purchased illegally on every street corner. Actually, that's an exaggeration, but in fact it's true that . . ."

Involve the reader in a bit of conjecture. "Suppose you were faced with the choice of living with cancer every day, or obtaining treatment that may or may not work, at the cost of becoming bankrupt and homeless."

Sometimes you can start with a statistic. "This year, over two hundred thousand Americans will be diagnosed with lung cancer."

Summarize the current state of affairs, then tell how it's changed recently. "Until recently, new MBA graduates could count on getting a job straight out of school. That's no longer the case."

Put up a straw man and knock it down. "The conventional view of [XYZ] is [ABC]." (That's the straw man.) "But it turns out the conventional view is wrong." (That's knocking it down.) Naomi Klein often uses this technique.

Bottom line: When you're stumped as to how to begin a piece of writing, consider doing one of the following:
  • Simply tell the reader what the subject is.
  • Make a blunt statement.
  • Cite a statistic.
  • Tell a first-person anecdote that's relevant to the subject.
  • Tell a third-person anecdote.
  • Put up a straw man, then knock it down.
  • Summarize a current state of affairs (or the conventional wisdom), then tell what's changed.
  • Summarize previous research, then tell what new research has found.
  • Involve the reader in a bit of conjecture.
  • Start with a quotation from a famous figure. (But beware of triteness.)
  • Commit an egregious exaggeration. Then explain what the (less extreme) reality is.
These aren't all the possible ways to start a piece, but if you're completely stuck, one of them should work. If not? Surprise the world with something outlandishly original. Don't be like most people. Don't just start with "If you're like most people . . ."


Monday, January 14, 2013

A Schizophrenia Memoir in the Making

On New Year's Day, I posted "How I Fell in Love with a Schizophrenic," which was seen by over 60,000 people in 144 countries and garnered more than 450 comments, total, around the Web, at places like Reddit, Metafilter, Hacker News, Filtred Mind, and here on my own blog.

Sally's home from the hospital now. Like me, she was astounded at the popularity of my New Year's post. Because of the obvious interest in her story, I told Sally she should seriously consider writing a memoir. She agreed. And that's what she's spending her days doing now: writing the story of her descent into schizophrenia, and the struggle to come back into the real world. The working title of the book is Almost Normal.

We're currently looking for a literary agent. (If you know of any that might be interested, please point them our way.) Whether or not Sally's memoir is agented, we'll be looking for a publisher. And if we don't enlist the help of a New York (or other) publisher, we'll self-publish, obviously.

At the bottom of this page is a signup form. If you want to follow the progress of Sally's book, including our travails trying to find an agent and a publisher, enter your e-mail address in the form and you'll hear from us once or twice a month with updates. Your e-mail address will not be shared with anyone, nor will you be bombarded with newsletters.

Oh, and did I mention that if you sign up, you'll get the occasional chapter-in-progress that Sally's working on? Or that you'll get to see the query letter we intend to send to agents? And the responses we get from the various literary agencies? And our "take" on the whole process?

We'd love to have you join us on this ride down the twisting, turning road to publication. It's bound to be a lot of fun.

So please, let's stay in touch. We'd love to hear from you. Once you sign up, you'll get a thank-you note with our e-mail address, in case you want to write directly.

Thanks. Wish us luck.

Be well.