The Language of Autism: “Special Interest” as a Stigmatizing Phrase

When a non-autistic person studies something deeply, it’s an “area of expertise,” and the acquisition of such expertise is considered a commendable accomplishment. When an autistic person studies something deeply, it’s a “special interest,” and it’s considered a symptom of pathology.

Nick Walker

Nick’s post on Facebook really hit home for me, because, not long ago, I had been involved in an exchange about this very topic.

It’s hard to express how infantilizing and degrading it feels to hear the phrase “special interest” in connection with autistic behavior. I’m now in my eighth decade on this planet, so the term is not often applied to me; nevertheless, I cringe whenever I hear it. And I recently had an experience in which I was asked to describe my “special interests” — I can’t even…

I received a query from a local professional organization as to whether I would be willing to do an interview with a writer for a prominent national magazine. My understanding was that the reporter was interested in gathering information from autistic people with a variety of interests.

In an email to the writer, I expressed my willingness to be interviewed, and I provided quite a bit of background information about myself, hoping that would make the interview process go more quickly and smoothly.

In response, I received a form letter, with 5 questions. The first one was “tell me about yourself” and included items that I had already fully answered. The other 4 questions were about my purported “special interests.”

Naturally (as you will surmise, I suspect, if you are autistic), this got my dander up. Instead of answering the questions, I sent back a short essay about why I find the term “special interests” to be objectionable. I’ll summarize some of the points here, and expand on others. This list is not the response I gave, but it contains the essence of my essay. Some things to think about. In all of this, please keep in mind that I know that a lot of what I say is speculative, and I realize that I speak only for myself, not for anyone else, let alone autistic people as a group. We are just as varied among ourselves as neurotypical people are.

That said, here are my thoughts:

  • What is it about an autistic person’s interests that makes them “special”? Why aren’t the interests of neurotypical people “special”? What do you call an interest that is not “special”?
  • The request struck me as asking me to make fun of myself. As if having a deep interest in something was in fact odd, and amusing, much as one would humor or demean a person who collects worthless objects of some kind. All of this brought to mind the phrase coined by Jim Sinclair, an early pioneer of autism self-advocacy.
  • We are not “self-narrating zoo exhibits.”

  • If a young autistic person, such as my friend Tim Page, has a deep interest in music, is that a “special” interest? What do you call it when his knowledge of and love for music leads him into a career as a music critic, for which he is awarded a Pulitzer Prize?
  • What do you call the interest that a kid had in ants that led him to spend all of his free time exploring the woods, turning over rotten logs, and studying ant colonies? What do you call it when he becomes a professor at Harvard and the world’s foremost authority on ants? And one of the world’s foremost authorities on evolutionary biology and environmentalism? I’m talking, of course, about Edward O. Wilson. I don’t know if he’s autistic, although I’d guess so, but does it really matter? He is certainly one of my heroes, autistic or not. I have only met him once, and did not have enough time with him to form an opinion of his neurological status, but I do find it telling that he was the one to solve the social insect problem that plagued Darwin and caused him to delay publication of his theory of natural selection. And I’m quite sure that Darwin must have been autistic. Another likely autistic was a famous person born on the same day as Darwin. Abraham Lincoln’s self-description is about as good an explanation of how the autistic brain works as I’ve ever seen.
  • I am slow to learn and slow to forget that which I have learned. My mind is like a piece of steel; very hard to scratch anything on it and almost impossible after you get it there to rub it out.

  • What do you call an interest in baseball that is so intense that it leads a person to neglect personal relationships? What do you call it when that interest becomes an obsession with being the best hitter of all time? And what do you call it when he makes that happen? Again, I don’t know if Ted Williams was autistic, but all the signs are there. Again, I could ask, does it matter? But maybe it does. Maybe the question is, could this have been achieved by a person who was not autistic?
  • What about Tesla, and Jefferson, and Einstein, and countless other (presumed) autistic people whose interests led to wonderful discoveries? Many pioneers in mathematics likely were autistic. Newton and Turing, to name the obvious. I’m not sure if the brilliant Pascal was autistic, but his interest in a gambling problem led to the groundwork upon which probability theory (statistics) was built.
  • What, in general, was the role of autistic people in history? We are known for our ability to recognize patterns (and deviations from patterns). My friend John Robison has speculated that autistics might have made up the priestly castes in ancient cultures; the ones who noticed the patterns of the sky and the seasons, and invented calendars and astronomy. They kept the records of floods and growing seasons, and made civilization possible. More recently, he has written about the possibility that early Pacific Ocean navigators were autistic.
  • The list goes on. My Hall of Fame grows.

Much of this is speculative, of course. Not every innovative person is autistic. And not every autistic person produces world-changing discoveries. Many autistic people who have focused interests may pursue them simply because they are satisfying in some way. Some may produce innovations that benefit only themselves or a few people around them. Some neurotypical people may have obsessive, focused interests that rival those of the most intense autistic people. Some autistic people may not have any obvious such interests. The world is as varied as the number of people in it. And thank goodness for that!

Please drop the modifier “special” when talking about the interests of autistic people. We know we’re different. We are aware of that; it is an awareness that is deep in our souls, from our earliest days of self-awareness, if my own experience is any guide. We want to be proud of our achievements, based on what we accomplish. Not as autistic people, but as people. Thank you.

Another Candidate for my Autism Hall of Fame: John Couch Adams

Isaac Newton (1642-1726) is often mentioned (and rightly so, from what I can tell) as having probably been autistic.

Now, I learn of a latter-day (1819-1892) kindred spirit.

John Couch Adams is known to history as having been hot on the trail of the discovery of Neptune, only to be beaten to the punch in 1846 by Urbain Jean-Joseph Le Verrier, who produced similar (and more accurate) calculations of its presumed orbit, enabling astronomer Johann Gottfried Galle to definitively identify the planet for the first time.

Neptune had been seen and noted by other astronomers, including Galileo (in 1612), but they had not recognized it as a planet. Over the years, irregularities in the orbit of Uranus had led to speculation that there might be another, more distant, planet whose gravitational pull was influencing the slightly erratic pathway of Uranus through the heavens.

Adams evidently had investigated this problem, and had either figured out the orbit of the unknown planet, or knew how to do so. He had presented his preliminary findings to the British Astronomer Royal, George Biddell Airy. Neither man actively pursued an empirical investigation, so the honors of the discovery of Neptune went to Le Verrier and Galle.


  • Forgive the links to Wikipedia: I’m aware it can be an unreliable and biased source at times, but there is also a plethora of external sources for well-known stories such as these, for those who wish to read more.
  • To see a full-sized copy of the graphic below, click on the image, and then use the “back” function in your browser to return to this post.

In my reading about this story, the article that caught my eye with respect to autism was one that appeared in Scientific American in December 2004, not long after some missing papers were found (in 1998) that filled in gaps in the historical record. It’s a good article, but it is unfortunately behind a paywall. The article is here for those who have access to it, and here is a graphic that summarizes the discussion.

The overview of the article states

  • The early 19th century had its own version of today’s dark matter problem: the planet Uranus was drifting off course. The mystery was solved in 1846, when observers, guided by theorists, discovered Neptune. Its gravity could account for Uranus’s wayward orbit.
  • Historians have traditionally apportioned credit between a French theorist, Urbain Jean Joseph Le Verrier, and an English one, John Couch Adams. Le Verrier’s role is undisputed, and so was Adams’s–until the mid-20th century.
  • Just as more historians were beginning to reexamine Adams’s role, a sheaf of crucial documents went missing from a British archive. It surfaced in Chile in 1998. The authors came across other crucial documents this past summer.
  • The bottom line is that Adams did some interesting calculations but deserves no credit for the discovery.

In writing about Adams, the authors say

His life paralleled that of Isaac Newton in some respects. Both grew up in the English countryside — Newton as the son of an illiterate yeoman in Lincolnshire, Adams as the son of a sharecropper in Cornwall. Both were interested from an early age in the regularities of mathematics and natural phenomena; both drove pegs or cut notches in window ledges or walls to mark the seasonal motion of the sun. They had similar idiosyncrasies: sobriety, fastidiousness, religious scrupulosity. Contemporaries viewed them as dreamy, eccentric and absentminded. Both Newton and Adams would probably be regarded today [2004] as having Asperger’s syndrome, sometimes known as high-intelligence autism.

Other anecdotes in the article also paint Adams as the typical “absentminded professor” that Hans Asperger himself had mentioned as an example of how autism is not rare, but can be seen all around if one knows what to look for.

Adams arrived to take up his studies at Cambridge in 1839.

His landlady said she “sometimes found him lying on the sofa, with neither books nor papers near him; but not infrequently … when she wanted to speak to him, the only way to attract his attention, was to go up to him and touch his shoulder; calling to him was no use.”

Oh, how familiar that is to me! I can’t say how many times I was told I had been addressed, and made no acknowledgement of having been spoken to, although I seemed alert. Sometimes I would remember the incident, but often I did not, my mind having been occupied with some problem or thought that excluded the outside world from interfering.

Elsewhere, the article describes Adams as a retiring, self-effacing man, who perhaps could have earned credit for discovering Neptune had it not been for his tendency to procrastinate and his dislike of writing. Again, I can relate to these characteristics, as well as to the description given toward the end of the article:

A discovery does not consist merely of launching a tentative exploration of an interesting problem and producing some calculations; it also involves realizing that one has made a discovery and conveying it effectively to the scientific world. Discovery thus has a public as well as a private side. Adams accomplished only half of this two-part task. Ironically, the very personal qualities that gave Le Verrier the edge in making the discovery — his brashness and abrasiveness, as opposed to Adams’s shyness and naïveté — worked against him in the postdiscovery spin-doctoring. The British scientific establishment closed ranks behind Adams, whereas Le Verrier was unpopular among his colleagues.

In my career as a financial analyst, I always felt if I did a good job and produced superior results, my work would be recognized and rewarded. To a certain extent that was true, but I can see now that a good measure of my success was aided by people around me who were willing to publicize my achievements.

It’s possible that I could have done bigger and better things if I had been more of a Le Verrier type and less like Adams, but I was more than satisfied with what I had accomplished, and really did not crave any more attention. I can see now that this attitude did not necessarily lead to maximizing my economic value (which is okay by me). I really loved doing research and exploring new ideas. That was the thrill for me. Having solved one problem, I was ready to move on. But my employer on Wall Street had other priorities.

At one point in my career, I had been using a set of models and a method of analysis to ferret out values within the largest American companies whose stocks were traded by my firm. I wrote a weekly market commentary and provided detailed lists to my clients. At first it was fun, because I was still learning. After a time, though, I began to get bored, despite the dynamism of the marketplace. I wanted a new challenge.

I began to dabble in analyzing other markets; in Europe, Asia, and elsewhere. I published some (internal) research papers that were well-received, and I had visions of taking over the world (so to speak). My company had just purchased the largest historical database in existence, of financial and market price data, of thousands of companies around the world. But I knew I couldn’t do everything. It would be a huge undertaking to tackle this new challenge.

I went to my boss, Peter, and told him of my desire to undertake this project. I had confidence in the team that worked for me, and I knew that I could figure out how to create a great value-added product. I explained that in order to do that, I’d have to give up the work that I was already doing, because that was a full-time job in itself, but that was okay with me because I felt a need to move on.

It would be another 15 years or more before I would become aware of my own autism. Perhaps with an expanded understanding of how I fit into the world (or don’t), I would have recognized my own naïveté. Peter listened attentively, and he was very supportive. “Sure, you can do that, it sounds great! Just one thing, though. You have to keep doing what you’re doing now. Our clients love you!” I was trapped by my own success!

I can also relate to Adams’s reluctance to push himself on to others. In explaining why he had failed to convince his colleagues at Cambridge to pursue a search for the mysterious planet, he admitted to not taking the time to fully explain his ideas.

I could not expect however that practical astronomers, who were already fully occupied with important labours, would feel as much confidence in the results of my investigations, as I myself did.

This brings to mind a conversation I had with a couple of brothers in a family that appeared to me to have plenty of autism running through all three generations I knew. “We’re not very good self-promoters!” one of them said as we discussed the challenges of setting up a new business. His brother laughed and nodded.

I never got to do that international project. A few months later, I left that firm to take a job in a new city, where I was offered a chance to take on new challenges. I had to take a cut in pay, but I had my priorities.

Adams was honored in many ways during his lifetime. He is said to have turned down a knighthood that was offered by Queen Victoria. In 1881, he was offered the position of Astronomer Royal, but he preferred to pursue his teaching and research in Cambridge.

I Was Ahead of My Time (But I Knew That!)


  • There is a new style of AI (artificial intelligence) that has, in recent years, taken the world by storm. Called Deep Learning, this approach has made possible self-driving cars, enhanced voice and image recognition, and audible translation from one language to another, to name just a few breakthroughs. Most of what we observe being done in such areas would have been only the stuff of science fiction as recently as 15 or 20 years ago.
  • Much of my career was spent creating computer models to analyze the world of finance and investments. For 30 years, beginning in the mid-1970s, I was involved in cutting-edge research that explored ways to control risk, identify arbitrage opportunities, and find value-added investments for the money managers, pension funds, governments, and foundations who were my clients.
  • For most of that period, AI was only a curiosity; a field in which the ideas were bigger than the capacity of computers to carry them out. If you saw the movie The Imitation Game, you may remember the scene in which the computer took more than 24 hours to solve the Nazi code, which changed daily. A few years later, a famous mathematician came to UMass in Amherst for a guest lecture, and announced that he was now able to forecast tomorrow’s weather. The catch was, he admitted, that the calculations were so complex it would take him a month to do it. But eventually, these problems became tractable, and computers “learned” to play chess and do other amusing things in real time.
  • In the latter phase of my computer modeling days, I played with primitive forms of AI, such as neural networks, and feedback models. I found these approaches to be useful additions to the work I was doing, although I realized I could not do all that I would like because of capacity constraints (on my time and in computer power).
  • I’m delighted to find that the things I could only imagine 15 years ago are now being implemented on a large scale. I take no credit for promulgating these ideas; they were the next logical step in the technology of the time. I only hope that these recent rapid advances presage solutions to some of our most vexing problems. I’m optimistic because I don’t like to think of the cost of failure.

The Old Days

I worked on Wall Street in the 1980s for Morgan Stanley, one of the most technologically advanced companies of the day. Indeed, many of my early projects involved creating automated trading systems to execute sophisticated strategies that simply could not be done efficiently by a human trader.

We had at our disposal IBM’s most powerful and sophisticated mainframe computers, and we coded in APL, a language well-suited to the large data matrices that we needed to handle. Some of the models I created would bring these computers to their knees, so it was clear we were working at the frontiers of this kind of analysis.

When a technical problem arose, I had a number I would call for assistance. The person at the other end would always answer, “VR, how can I help you?” and I would explain my situation. One day, after I’d been working there a while, I asked a co-worker, “What does ‘VR’ stand for?” Oh, I was told, that stands for “voice recognition.” It seems that there had been an ambitious effort to save the traders from writing a ticket for each trade. Instead, they would call a phone number and recite the terms of the trade and it would be recorded directly on the computer system. The tests went well, and all the traders were trained, and they threw away their paper pads. But in real life, it was a disaster! Quickly, a fall-back plan was implemented. Instead of the computer answering the phone call, an actual person would take the call and write up the ticket, then enter it into the computer system. “Voice recognition,” to be sure — the old-fashioned way.

Machines Who Learn

There was an article in the June 2016 issue of Scientific American by this title, linked to (but behind a paywall) in this write-up. The editorial choice of the pronoun is telling. Machines have begun to think like people. Instead of being pre-programmed to anticipate every possible situation (an impossibility), they are now made to learn from experience, as humans and other living beings do.

The article points out that the advances in AI have been made possible by improvements on two fronts: hardware and software. The hardware of 30 years ago was unable to cope with the demands of even (by today’s standards) simple systems. Among other innovations, there has been (at least) a 10-fold increase in computing speed

thanks to the graphics-processing units initially designed for video games…

On the software side, the early primitive neural networks have moved away from a static, linear model to more sophisticated feedback algorithms.

Reading about this brought back fond memories for me, of my days playing with early versions of these approaches. By the time these models had come into experimental use, I had gone off on my own, and had neither the resources nor the time to pursue them in the ways I could envision them being used. I was too busy earning a living, which involved a lot of expert witness work and real-world forecasting. I wish there had been more time to do pure research, although I did play around enough with the ideas, in what little time I could spare, to derive some insights that helped with the services I offered. That analysis included models I used to forecast foreign exchange rates, among other things.

The following graphic from the SciAm article brought me chuckles of recognition. Recognition of the natural, human variety, that is, not the kind mentioned here:

The basic structure shown here is identical in concept to the primitive neural networks that I created 15 or 20 years ago. One first chooses how many input nodes there will be; these correspond to the information thought to be relevant to the problem at hand. In the case of forecasting currency exchange rates, for example, these inputs might include interest rates, inflation rates, the price of oil, and other exchange rates, among many other variables.

An arbitrary number of “hidden layers” is chosen (guided by experience, to be sure), and historical data are fed into the algorithm, which produces a forecast that can then be compared with the actual historical outcome. The network is “trained” on a bunch of such data until its internal coefficients are deemed to have reached an “optimal” level for forecasting.
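That training procedure can be sketched in a few lines. Everything below is invented for illustration (the data are synthetic and the network is a toy): a single hidden layer trained by gradient descent until its internal coefficients fit the historical outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the inputs discussed above (interest rates,
# inflation, oil prices, other exchange rates): 200 periods, 4 input nodes.
X = rng.normal(size=(200, 4))
y = np.tanh(X @ np.array([0.5, -0.3, 0.2, 0.1]))   # synthetic "actual outcomes"

n_hidden = 8                                       # hidden-layer size, chosen arbitrarily
W1 = rng.normal(scale=0.5, size=(4, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

def forecast(X, W1, W2):
    """Input layer -> tanh hidden layer -> output node."""
    return np.tanh(X @ W1) @ W2

mse_before = float(np.mean((forecast(X, W1, W2) - y[:, None]) ** 2))

lr = 0.1
for _ in range(500):                               # "train" on the historical data
    h = np.tanh(X @ W1)
    err = h @ W2 - y[:, None]                      # forecast minus actual outcome
    grad_W2 = (h.T @ err) / len(X)
    grad_W1 = (X.T @ ((err @ W2.T) * (1 - h ** 2))) / len(X)
    W2 -= lr * grad_W2                             # nudge the internal coefficients
    W1 -= lr * grad_W1

mse_after = float(np.mean((forecast(X, W1, W2) - y[:, None]) ** 2))
```

After training, the mean squared forecast error on the historical data is lower than it was with the initial random coefficients; in practice one would then judge whether the fit is “optimal” enough to use for forecasting.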

I’m being vague here because I don’t want to get bogged down in technical detail. Suffice it to say that I did not find this approach to be very useful. The obvious problems are that there may be relevant variables that had not been included in the data set, or that the historical period studied may not be representative of the world going forward. And the structure in those early days did not allow for the model, once in place, to learn from its experience.

These same criticisms, by the way, could have been leveled against the more conventional econometric models of the time. Those models did not have “hidden layers” but they were linear and static. They worked fine until they didn’t, when the world changed, or some catastrophic event occurred.

I felt a need for a more dynamic approach, and at the time I was learning about complexity theory.

[All of this talk about my work is good material for another post (so stay tuned); for now, just some hints!]

I became familiar with a 1976 paper by Robert May (now Lord May), a naturalist who studied animal population fluctuations. He analyzed the effects of (unrealistically) high growth rate assumptions in the logistic difference equation. This equation had been around for a long time (first published in 1845), and was used to predict the ups and downs of population density. May demonstrated that this equation, which had long been thought to be quite orderly, could in fact produce chaotic results. The thing that intrigued me about this equation was not so much its transition from the orderly regime into the chaotic (which was indeed fascinating), but that it had what I would come to call a “learning coefficient.”

This equation has a feedback component, because the prediction for the next period depends on an estimate of the population’s natural growth rate (assumed to be the same over time), and a measurement of how close the population size is to its maximum (the largest population its environment could sustain). Of course these values cannot be known with precision, but using historical observations as approximations will produce graphs (or maps) that look very much like what happens in nature.

What I took away from my study of this equation was the idea that there could be a built-in “error correction” that could operate dynamically. In the case of animal populations, the corrections were made by a natural response to the availability of resources. When a population is small relative to food supply, growth is rapid. As the size of the population nears its maximum, starvation will reduce the population back to the point that it can start growing again. In real life, things are more complex than this, because of predator/prey interactions — but I digress!
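The equation itself fits in a few lines. This is the standard logistic difference equation from May’s analysis, with the population expressed as a fraction of its environmental maximum; the growth rates below are illustrative values chosen to fall on either side of the transition from order to chaos.

```python
def logistic_step(x, r):
    """One period of the logistic difference equation: the r*x factor is natural
    growth, and the (1 - x) factor is the built-in correction that slows growth
    as the population nears the maximum its environment can sustain."""
    return r * x * (1 - x)

def trajectory(x0, r, periods):
    """Iterate the equation forward from an initial population fraction x0."""
    xs = [x0]
    for _ in range(periods):
        xs.append(logistic_step(xs[-1], r))
    return xs

# Orderly regime: a modest growth rate settles to a steady population level.
orderly = trajectory(0.2, 2.8, 200)

# Chaotic regime: the same equation with a high growth rate never settles,
# which is the surprising behavior May demonstrated.
chaotic = trajectory(0.2, 3.9, 200)
```

With r = 2.8 the population converges to the fixed level 1 − 1/r; with r = 3.9 it keeps bouncing unpredictably, even though the rule generating it is identical.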

Referring back to the currency-forecasting model I mentioned: my complaint had been that its operation was too dependent on the data set that was arbitrarily chosen as being “typical” of what the future might bring. And it was difficult to know when the world might have changed enough to warrant re-estimating the model. But what if the model could learn from experience? I figured out a way to do this, and I’ll spare you the details, but my tests showed that my new approach was superior to the one I had been using (at least it was when tested on historical data).
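The details of the actual model are deliberately omitted above, so the following is only a generic illustration of the idea, not the method that was used: an online least-mean-squares update (every name, parameter, and data series here is invented) that nudges the forecasting coefficients after each period’s error, so the model adapts on its own when the world changes rather than waiting to be re-estimated.

```python
import random

random.seed(1)

def lms_forecaster(history, lr=0.1):
    """Online least-mean-squares: after each period's forecast, nudge each
    coefficient in proportion to that period's error, instead of estimating
    the model once on an arbitrary historical window and leaving it fixed.
    Returns the absolute forecast error for every period."""
    w = [0.0, 0.0]
    errors = []
    for x, actual in history:
        prediction = w[0] * x[0] + w[1] * x[1]
        err = actual - prediction
        errors.append(abs(err))
        # The "learning from experience" step: error-correct the coefficients.
        w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
    return errors

# A synthetic world whose true relationship changes halfway through:
# the situation that defeats a model estimated once and left in place.
history = []
for t in range(400):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    a, b = (0.8, -0.2) if t < 200 else (-0.4, 0.6)   # regime change at t = 200
    history.append((x, a * x[0] + b * x[1]))

errors = lms_forecaster(history)
```

After the regime change at period 200, the forecast error spikes and then decays again as the coefficients re-converge, which is exactly the dynamic behavior the static econometric models of the day lacked.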

So I went “live” with the new model, and incorporated it into the research and advice I was selling. It continued to work well. Mind you, in the world of economics and finance (a bit like weather, I guess), there is no such thing as an “accurate” forecast. The standard, rather, is whether a forecasting algorithm is better than others that are available. Being a little bit better than the competition is all that can be hoped for (and is rewarded) in the fast-paced world of financial markets.

I had dreams of going the next step, which would have involved letting the computer learn how to adjust other features of the model. Adding another layer, in other words, in which the algorithm would “teach” the components that were already learning, by observing their performance and adjusting how they made their adjustments.

I never got to do that, however, because, as the ancient saying goes, life is what happens while you are making other plans. For various reasons, I decided to exit the business, and never got to play with my next level of ideas.

Imagine my delight, then, when I saw the diagram above, and from the description in the article, realized that my vision was now a reality, and is what is driving this new wave of AI.

I’m not claiming any credit for promulgating these ideas, because I did not. They were an obvious outgrowth of the work that was going on 10 to 15 years ago, as I was winding down my involvement. At the time, as the SciAm article discusses, computing power was inadequate to solve many real-world problems. I was already working at the fringes of this line of computing, and it wasn’t easy. But it sure was fun. And it’s fun for me now to see where things have gone.

I have every reason to believe that in another 10 or 15 years, we will be witnessing new applications that are only a glimmer today. Perhaps we will even figure out how to ameliorate the vexing problems that threaten us today, such as overpopulation, deforestation, species loss, and climate change. I’d like to think we can do that, because I don’t want to contemplate the alternative.

The Good Nudge

Another Obama program that may or may not survive in the new Administration.

A recent (January 23, 2017) issue of The New Yorker contains an article (“Good Behavior”) that describes the final days of an Obama initiative to use behavioral science in the service of improving government performance.

The article focuses on the Flint water crisis, and mentions several other projects that have used this approach.

The story is a hopeful one, in the sense that there are possibilities for doing good with the proper application of what is called “choice architecture.” There is also a warning here, that such an approach is value-neutral, so can be used in a negative way, as well. Trump, Hitler, and Stalin are all cited as people who have used “the behavioral arts” to influence public opinion.

George Lakoff has lectured and written on this subject for many years, pointing out many of the same themes that are mentioned in this article.

Language matters; it reveals our values and it helps shape them, for better or for worse.

Look What I Got in the Mail

I was 15 years old.

My grandmother had sent me this notecard. I knew it was coming, because she had told me about it. “Someday,” she told me, “you will be very proud to have this.”

I didn’t have to wait; I was proud of it from the moment she told me about it. I was very close to my grandmother. For years, I often went to visit her and spend time with her in her office after my school day was done. She told me stories. She gave me things to read. She showed me things in her world, which was the Historical Room in the Stockbridge Library. I loved every bit of it.

So now, 55 years later, I am still proud of my heritage.

Let’s See if I Can Tape This All Together

What do cupcakes and chocolate have in common? I guess that’s pretty obvious, but Scotch Tape?

In September 2009 Scientific American devoted an entire issue to “Origins” and I’ve chosen three of my favorites to link together here.

First up, cupcakes: where and when were they invented, and whence the name?


If you click on the images in this post, you will see full-size (readable) versions, in case you care to look at the details. Then, click the “back” icon to return to reading the post.

The point of this “Origins” blurb is that the cupcake is probably an American invention, first noted in 1826, and was likely a variant of the British “pound cake.” I’m not much of a baker, so I used my mathematical propensities to come up with a likely explanation for the names of these two cakes. A pound cake, I reasoned, weighed a pound, and I remember my mother’s folk wisdom, which she drummed into me when I was young and learning about such things in the world, “A pint is a pound, the world round,” she would chant whenever I asked her how many ounces were in a cup or a pint or a quart. As it turns out, things are a lot more complicated than that, but my childhood understanding that a pint of water weighed about a pound, and both contained 16 ounces, made it easier for me to do conversions.

So, when I read that the American cupcake is a downsized version of the British pound cake, and being familiar with the traditional shape of the cupcake, I immediately fantasized that cupcakes must have been baked in small cups, unlike the pound cake, which must have been baked in pint-sized mugs.


In this miraculous Age of the Interwebs, I was able to discover that the origin of the name “pound cake” came from its simple proportions: one pound each of flour, butter, eggs, and sugar. This recipe became popular in the early 1700s, perhaps because it was an easy one to remember. A cake of any size, made with these same ingredients in equal proportions, is called a pound cake.

Although my intuition about the origin of the cupcake name seems to have more support among food historians than the explanation given in the SciAm version, there seems to be some disagreement as to when the name first appeared. Some sources have 1828, instead of the 1826 mentioned here. In any case, the 1796 date is often cited as the date of the first known recipe, published under the name “a light cake to bake in small cups” — a recipe which gives the lie to the idea that it is simply a smaller pound cake, both upon inspection and because the author gives a separate recipe for a pound cake.

_A light Cake to bake in small cups_.

Half a pound sugar, half a pound butter, rubbed into two pounds flour,
one glass wine, one do rose water, two do. emptins, a nutmeg, cinnamon
and currants.

Evidently the “small cups” referred to any cups that happened to be available, not to the 8-ounce standard measure of a “cup” or “glass” that came to be used later. Metal baking trays came later still, and the paper holders we are familiar with did not come into widespread use until the 1950s.

Early cupcake recipes often followed the “1234” formula, which also makes it clear that they were not a smaller version of the pound cake. “Quarter cakes” these were sometimes called, not because of their size, but because of their four ingredients: 1 cup butter, 2 cups sugar, 3 cups flour, and 4 eggs.

In my excursions through the history of cakes, I noticed (not for the first time) that many older references used the word “receipt” in the same way we now use “recipe”; the two words have related etymologies.

In the Middle Ages, a doctor’s instructions for taking a drug would begin with the Latin word recipe, literally, “take!” Recipe is a form used in commands of the verb recipere, meaning “to take” or “to receive.” The verb receive itself comes from Latin recipere, but through French—as does the word receipt, which was once commonly used to mean “recipe.” From its use as a name for a drug prescription, recipe extended its meaning to cover instructions for making other things we consume, such as prepared food.

The “recipe” in a drug prescription is now universally abbreviated to “Rx” — I imagine that most people don’t know what it stands for.

I wonder if my mother ever knew that the British Empire used the Imperial pint of 20 ounces. There were still 8 pints in a gallon, which makes it tempting to call the Imperial gallon 25% more voluminous than the American version (160 ounces versus 128); because the Imperial fluid ounce is slightly smaller than the American one, though, the Imperial gallon actually holds about 20% more. Also, as I discovered in my trips to Canada back in the day, gasoline by the gallon was correspondingly more expensive. MPG were better, though.
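For anyone inclined to check the gallon arithmetic, here is a quick back-of-the-envelope sketch in Python. It uses the modern legal definitions of the two gallons (4.54609 liters for the Imperial, 231 cubic inches, or about 3.7854 liters, for the US); the variable names are mine, not anything from the original.

```python
# Naive comparison, counting ounces: 8 pints of 20 Imperial fl oz
# versus 8 pints of 16 US fl oz.
naive_ratio = (8 * 20) / (8 * 16)  # 1.25, i.e. "25% bigger"

# Actual comparison, using the legal definitions of each gallon.
IMP_GALLON_L = 4.54609       # Imperial gallon, in liters
US_GALLON_L = 3.785411784    # US gallon (231 cubic inches), in liters
true_ratio = IMP_GALLON_L / US_GALLON_L  # about 1.20, i.e. "20% bigger"

print(f"naive ratio: {naive_ratio:.2f}, true ratio: {true_ratio:.3f}")
```

The discrepancy comes entirely from the two fluid ounces being different sizes: the Imperial ounce is about 28.4 ml, the US ounce about 29.6 ml.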

I trusted my mother’s keen sense of practicality when it came to dealing with life’s pragmatic challenges. I remember chatting with her one time in the early 1970s, about 10 years after I had left Stockbridge to move to Springfield. I can’t fix the exact date, but I know it was after the moon landing in 1969, and before she moved out of our old house in South Lee. (After a brief stay in an apartment near the end of Park Street, the street where we had lived in my early childhood, she eventually became the first resident of Heaton Court, the new housing project for the elderly in Stockbridge.)

There was no television set in our home for most of my growing-up years. It remained that way until my mother won a small black-and-white set in a charity raffle conducted by the Elm Street Market. I suspected that Mike Abdulla might have put in the fix for her, probably knowing that we were one of few families in town without a TV. But perhaps it was just a stroke of good luck. In any case, it became a fixture in the house, though in my teenage years I spent less and less time at home, so didn’t really watch it much.

On that day I remember, I looked at the TV set, and that set me to thinking about how our family had made a tardy move into that era. I used to watch TV at friends’ houses, although my mother placed a strict limit of 2 hours on Saturday and 1 hour per day during the week. I had to choose carefully. The first time I ever saw a TV show was in the Rinsmas’ house on Yale Court. We were invited over to watch their new set. The screen was probably 8 or 10 inches on the diagonal, set inside a huge cabinet, and it was hard to see, what with all the people crowded into the living room. The show we were eager to see was live, as were most shows in those early days. Finally, the time arrived, and Ed Sullivan came beaming into our midst.

While pondering that, I remembered many of my mother’s stories of her youth. She had told me that when she lived on Hawthorne Road in Stockbridge in the late 1920s, there were still more people using a horse and buggy to get around than were driving automobiles. She also told me about the early days of radio, and of the trolley cars that used to ply the Berkshires. We would take the train from Stockbridge to Pittsfield once a year to visit Santa Claus at England Brothers Department Store on North Street. That was where I saw and rode, for the first time, an escalator and an elevator!

When I was young, and we were living on Park Street, we had an ice box. It was an exciting day when our first refrigerator was delivered. For years after that, though, my mother would refer to it as an ice box. “Mom!” we would object, “it’s not an ice box, it’s a fridge!” Similarly, she called aluminum foil “tinfoil” because that’s what it had been when she was growing up.

Breaking out of my reverie, I wanted to know what my mother thought of all those changes. “Mom,” I said, “you’ve seen a lot of new technology during your lifetime. You’ve seen automobiles come into common use, you’ve witnessed the advent of television, you know that I work with computers that didn’t exist just a few years ago, and now you’ve seen a man walk on the moon. This must all seem rather astounding to you. I’m just wondering, of all these marvels, and with all the other new things you’ve seen, which one would you say has made the most difference in your life?”

Without a moment’s hesitation, she answered.

Yes, that’s right, she said, “Scotch Tape!”

I would never have guessed that. I could see how she might say mimeograph machines, or color film, or something fairly prosaic, given her penchant for down-to-earth results, but Scotch Tape?! That took the cake.

And we can take the cake into the realm of chocolate. My friends Joe and Roxanne have recently experimented with several dietary changes, and have rejected most of them, but have decided to stick with being gluten-free. Needless to say, this has become quite trendy of late, with an article I read not long ago asserting that one-third of Americans are trying to cut back on or eliminate gluten from their diet. I had no intention of being a trend-setter when I moved away from eating gluten about 15 years ago. I had been very sick for quite some time before I figured out the cause.

In my early days of being gluten-free, obtaining bread, pasta, and other basics was next to impossible. Mostly, they were available only in health-food stores, and what was on offer was often unappealing in terms of taste or texture. As more people have discovered that going to a gluten-free diet makes them feel better, demand has increased to the point that almost every restaurant has identified which items on the menu are gluten-free, and supermarkets have special gluten-free sections.

Joe had a milestone birthday last September, and Roxanne secretly made a chocolate cake for him to bring and share with our hiking group. The cake was a big hit, with everyone (me included) exclaiming, “This is gluten-free?” in disbelief.

Joe, being the cook of the family, has shared with me many tips on brands of bread, pancake mix, and the like to try. I’ve been a little hesitant to get into chocolate cake production, however, since I think substituting sugar for gluten is probably not the way to a healthy lifestyle. Chocolate, however, now that’s a different story! At a recent party at their house, chocolate cupcakes were on offer. I begged to be able to take one home, and I was presented with not one, but three, as well as some chocolate chip cookies.

This short piece extols the health benefits of chocolate. The cupcake version comes with a fair amount of sugar, I suppose (I don’t really want to know!). Everything in moderation, I’m told.

Also, I note,

Chocolate may also be good for the mind: a recent study in Norway found that elderly men consuming chocolate, wine, or tea — all flavonoid-rich foods — performed better on cognitive tests.

I don’t know what their definition of “elderly” was, but I’m not taking any chances! Excuse me; I need to go get another cup of coffee.

Birds Did Not Evolve from Dinosaurs; They *Are* Dinosaurs

The January 2017 issue of Scientific American contains a fascinating article (behind a paywall) on the evolution of birds. Using birds as an example, the author makes several interesting points about evolution. Some are quite specific to feathers and such; others are more general, such as

Evolution has no foresight; it acts only on what is available at the moment…

His example of bird evolution shows that feathers evolved long before there were wings and flight, probably originally for warmth and later for display. He points out

There was no moment when a dinosaur became a bird… It was a journey.


Birds, therefore, are just another type of dinosaur.

Birds as we know them today had evolved well before the larger dinosaurs became extinct. Their mobility (flight) and small size enabled them to inhabit environmental niches that their larger cousins could not exploit. These same features also enabled them to survive the cataclysmic event that ended what we think of as the Age of Dinosaurs.

There is a graphic that shows the family tree, in which mammals (which branched off 252 million years ago) are the oldest survivors, followed by lizards and crocodiles. Later came many of the groups of (now extinct) large dinosaurs, such as the sauropods and tyrannosaurs. Later still came groups (also now extinct) that had many birdlike characteristics, followed finally (at least 100 million years ago) by birds resembling those we know today.

The graphic also pictures anatomical features that distinguish birds from other modern animals, such as “quill pen” feathers, fused collarbones that act as a spring (the “wishbone”), and a keeled sternum to anchor large chest muscles used for flight.

The hallmark traits of birds accumulated over tens of millions of years and in many cases originated for reasons unrelated to the purposes they now serve.

The author does not philosophize or speculate about why birds have been so successful, but he does point out that birds

…carved out a completely new way of life, and today they thrive as upward of 10,000 species that exhibit a spectacular diversity of forms, from hummingbirds to ostriches.

Birds, unlike ants and humans, are not eusocial (although there are a few examples of intergenerational cooperation), nor is flight/gliding unique to them (think bats and flying squirrels). Yet they were here first, and evolution finds it hard to knock out a successful occupant of any particular niche. Perhaps they will still be around long after humans have disappeared.

Addendum: just saw this in the January 23, 2017 issue of The New Yorker:

Frankly, I don’t see the resemblance.

What Do Plants See?

The latest (January 2017) issue of Scientific American has a short blurb entitled “Veggies with Vision” that harks back to speculation and studies of over 100 years ago.


In 1907 Francis Darwin, Charles’s son, hypothesized that leaves have organs that are a combination of lens-like cells and light-sensitive cells.


For some reason, research in this area went dormant until very recently. Now, scientists seem to be again taking up the study of such ideas. Perhaps they learned about it on the wood-wide web (it is now known that plants — including trees — communicate with each other via various chemical signals).


Although the evidence for eyelike structures in higher plants remains limited, it is growing.


I’m looking forward to learning more about this over the coming years. Meanwhile, behave yourself while out walking in the forest!


Following my comments, you will find an excerpt from an original essay, “Walking,” by Henry David Thoreau that appeared in The Atlantic in 1862; there is also a link at the end for those who want to read more.

Henry David Thoreau was the proto-environmentalist.

said Bill McKibben. Thoreau was also the one who, perhaps in a moment of self-reflection, said,

If a man does not keep pace with his companions, perhaps it is because he hears a different drummer. Let him step to the music which he hears, however measured or far away.

Many, if not most, people seem to benefit from time spent wandering in the woods, although Thoreau called us walkers a “select class.” The peace and quiet, the natural beauty, the bird songs, the evidence of creatures passing nearby, the breathtaking vistas; all of these things, and more, inspire a reverence for our place in the greatness of nature.

For those of us who are autistic, though, the call of the wild and the balm of the woods is more than a simple pleasure. It is a welcome, and perhaps even needed, antidote to our quotidian trip through the turbulent world in which we find ourselves. I don’t know if Thoreau was autistic, although many signs point that way. In any case, he has inspired generations of people, autistic and not, with his vision of simplicity and oneness with nature.

Years ago, when I worked long hours in a high-pressure job in the fast-paced world of Wall Street, I found this connection with nature to be an essential ingredient in my every day; the one way to soothe away the stress arising from my job. During working hours, I faced a constant barrage of incoming data, a requirement for social interaction with clients and peers, much travel, high expectations for piercing analysis; all accompanied by the background of city life and its cacophony and chaos.

My escape came in the form of a daily run, first thing in the morning. I averaged six miles a day, and I ran wherever I found myself. When I lived on the East Side of Manhattan, in the early 1980s, I would run with a friend down 2nd Avenue, and then back up 1st Avenue. We ran very early in the morning, before there was any traffic to speak of, so we didn’t have to stop for red lights at most intersections. One frigid winter morning when our breath was not only visible, but cracked and fell to the ground as we exhaled it, my friend turned to me, “Remind me why we are doing this?!” I thought it was for the exercise, but I later realized it was for my mental health.

I moved to the Upper West Side, and that gave me access to Central Park. I had a different running companion, and every morning we ran together around the six-mile loop in the hour before they opened the Park to automobile traffic. When I traveled, I sought out similar venues. In some places, such as Frankfurt and Zurich, I stayed with friends or in a hotel on the outskirts of town so that I had access to the countryside. In London, I became fond of St. James Park, Green Park, and of course Hyde Park. Tokyo was much more of a challenge, because the city has very little green space. So I ran around the perimeter wall of the Imperial Palace (several times, to get in my six miles) because it was the only place I could find that allowed an extended path with no traffic lights.

These days, I live in the countryside, but on two or three days a week I will take a walk with friends, or by myself. We are lucky, here in the Berkshires, to have many entities that have preserved and protected large portions of our landscape. Although there is more work to be done to connect many of these properties to create wildlife and walking corridors, at least we seem to have evaded the fate that Thoreau feared:

At present, in this vicinity, the best part of the land is not private property; … and the walker enjoys comparative freedom. But possibly the day will come when it will be partitioned off…

Here, as promised, is the excerpt and link:

June 1862
by Henry David Thoreau

It requires a direct dispensation from Heaven to become a walker. You must be born into the family of the Walkers. Ambulator nascitur, non fit. Some of my townsmen, it is true, can remember and have described to me some walks which they took ten years ago, in which they were so blessed as to lose themselves for half an hour in the woods; but I know very well that they have confined themselves to the highway ever since, whatever pretensions they may make to belong to this select class …

The walking of which I speak has nothing in it akin to taking exercise, as it is called, as the sick take medicine at stated hours,—as the swinging of dumbbells or chairs; but is itself the enterprise and adventure of the day. If you would get exercise, go in search of the springs of life. Think of a man’s swinging dumbbells for his health, when those springs are bubbling up in far-off pastures unsought by him!

Moreover, you must walk like a camel, which is said to be the only beast which ruminates when walking. When a traveller asked Wordsworth’s servant to show him her master’s study, she answered, “Here is his library, but his study is out of doors” …

I can easily walk ten, fifteen, twenty, any number of miles, commencing at my own door, without going by any house, without crossing a road except where the fox and the mink do: first along by the river, and then the brook, and then the meadow and the wood-side. There are square miles in my vicinity which have no inhabitant. From many a hill I can see civilization and the abodes of man afar. The farmers and their works are scarcely more obvious than woodchucks and their burrows. Man and his affairs, church and state and school, trade and commerce, and manufactures and agriculture, even politics, the most alarming of them all,—I am pleased to see how little space they occupy in the landscape …

At present, in this vicinity, the best part of the land is not private property; the landscape is not owned, and the walker enjoys comparative freedom. But possibly the day will come when it will be partitioned off into so-called pleasure-grounds, in which a few will take a narrow and exclusive pleasure only,—when fences shall be multiplied, and man-traps and other engines invented to confine men to the public road, and walking over the surface of God’s earth shall be construed to mean trespassing on some gentleman’s grounds. To enjoy a thing exclusively is commonly to exclude yourself from the true enjoyment of it. Let us improve our opportunities, then, before the evil days come.

Volume 9, No. 56, pp. 657–674

Read the full article here.

Shantih shantih shantih

#Pantsuit Nation

Today is Election Day.

As the hashtag #PantsuitNation sweeps the internet, I am reminded of a day many years ago when I made a momentous decision involving a pantsuit.

I know, I know! Hard to imagine a pantsuit being involved in an earthshaking moment, but this one was.

In the late 1960s, I was a supervisor and trainer for the newly-minted college graduates who came to work for the insurance company in Hartford where I had been employed, originally as a computer programmer. In those days, not too many people knew what a computer was, let alone what a programmer did. The field then was called “data processing” and that’s what it was — the insurance company had set out to replace the sea of clerks, who wrote on and filed index cards, with computers that could process and file premium payments, as well as claims. The company also was awarded the contract to process all transactions in Connecticut for the new Medicare system.

I was born in 1946, and my age cohort was on the cusp of the change in societal attitudes around the roles of women. I was an ardent feminist (although I don’t think I used that word in those days), and an advocate for equal treatment of women in the workplace. Many, if not most, people younger than I shared that attitude, but few people older than I did.

My management style seemed to suit people who felt isolated and left out of the mainstream. As a result, I was often assigned oddball or problem employees. With the insight I’ve acquired over the years, I believe that I was sympathetic, even empathetic, to people of color, folks with dodgy pasts, gays, women, and other mistreated people because (unbeknownst to me at the time) I am autistic, and had experienced the same kind of isolation and misunderstanding.

Many of the new hires who reported to me were women, which was unusual because they were coming in at a fairly high level. Of the 2,000 or so people who worked in the building, most were women, but almost all of them were doing clerical jobs. I think there were about 200 officers of the company, and only two of them were women.

One day in 1969, one of the young women I supervised came to me with a question. Cheryl was a bright, eager, recent college graduate, married, and I held her in high regard.

“Would it be all right if I wore a pantsuit to work?” she asked.

I was taken aback at the idea of a woman wearing pants to work, not because I opposed the idea, but because it just wasn’t done.

“What do you mean, ‘pantsuit’?” I cautiously inquired.

“Well, I have this nice beige polyester outfit that looks very businesslike, but it is a jacket and pants, not a skirt.”

“I see. And do you wear a blouse under the jacket?”

“Yes, a white blouse. I think it looks very professional, but I’ve never seen any woman wear pants here, so I thought I’d ask if it’s okay.”

I hesitated. I wondered if this was my decision to make. Perhaps I should go up the chain of command, or call the personnel department. Then the rebel in me took over. The hell with it, I thought, it’s a great idea. Shake ’em up a bit. One more step toward equality.

“I don’t see a problem with that!” I said. Of course, I did see one, but I was willing to live with the consequences.

Cheryl beamed. “Okay, thanks!” she said, and I put the matter out of my mind.

The next day she arrived at work in her pantsuit. I don’t remember thinking it was the cat’s meow, but it was as she described, very tame and businesslike. To me. But oh! the firestorm!

Shortly after the workday began, Cheryl came over to my desk and sat down. “What do you think?” she asked.

“Looks nice.”

“I’m getting a lot of funny looks.”

“Don’t worry about it!” I reassured her. “The outfit is just the way you told me it would be. I don’t see anything wrong with it.”

It didn’t take long. I soon got a call from someone in personnel, asking me if I had approved the outfit that was causing such a stir. Yes, I confirmed, I did — why? was there a problem with it?

“We’re not sure. We don’t have a policy about that, but it seems we may need to create one to address all the complaints we’ve been getting.”

There followed a fairly long conversation in which I vigorously defended Cheryl’s right to wear pants. Hard to believe in this day and age, but at the time it was a big deal. Several hours later I got a call back. “We’re not going to make an issue of it, and we decided not to have a formal policy, but it’s okay. Women can wear pantsuits.”

I went over and told Cheryl, and all her friends within earshot gave her a big cheer.

The Revolution had begun!