Sep 03


For the past few (4+) years, I have been a “Georger” — a person who uses the WheresGeorge website to track the movement of US currency that passes through my hands.

I’ll have a lot more to say about this in future posts; for now, I just want to start the conversation with an observation about an article that appeared in the latest (September 4, 2017) New Yorker.

One of the things that fascinates me about the WheresGeorge hobby (and, yes, there is a connection with my thoughts about the autistic personality) is learning facts about currency production and circulation. There is a tie-in, too, with my lifelong interest in money as a concept — how it works, how it was invented, and other aspects of “the coin of the realm.” My career, after all, was in finance, and I have degrees in Economics.

As an aside, the coin of the realm, in the original phrase, was the English penny. We Americans often talk as though we have a penny, but in fact what we have is a one-cent piece. There has been much speculation about phasing out the Lincoln cent (in production since 1909), since it is often treated as more of a nuisance than a piece of value. Back in the day when the penny was indeed the coin of the realm, it was so valuable that it was not even the smallest coin issued. That honor belonged to the farthing, which was worth a quarter of a penny (the word itself is a corruption of “fourthing”), and Ceylon (which at the time was part of the British Empire) issued a coin worth half a farthing. But I digress.

The New Yorker article, by Adam Davidson, is titled “Smart Money” in the print edition, and “How the Dollar Stays Dominant” online. It starts out with a description of Crane Currency (aka Crane & Company or Crane Paper), which makes all the paper used in US currency, and is located in the same Massachusetts county where I was born and now live. Berkshire County, as I understand it, was where American papermaking got its start. Prior to 1844, paper was made from cloth rags. In that year, two different inventors, one in Canada and one in Germany, came up with the idea of using wood.

Rags were probably in fairly short supply in the Berkshires, but there were plenty of trees. Despite the demand for wood in the charcoaling (ironmaking) industry, and the need for lumber (tall white pines were highly prized as ships’ masts), trees were mostly a nuisance to farmers who needed cleared land.

When I was growing up, in the 1950s, there were many mills along the Housatonic River engaged in the manufacture of paper, though by then most of the raw material was imported from farther north.

My father worked, for most of my childhood, at Eaton Paper Company, in Pittsfield. He was the foreman of the cutting room, where they produced envelopes and stationery. Our house was filled with scraps of paper he brought home — remnants of the cutting process; a multicolored collection of odd shapes left over after trimming large sheets. Eaton Paper did not manufacture paper, but took the local product and packaged it for end users.

We lived in Stockbridge, never far from the Housatonic River. As kids, we would often play down by the river, oblivious to the PCBs that flowed from the GE plant in Pittsfield. We always knew what color of paper was being produced that day, because the river would flow by in the various hues of the rainbow; pink one day, green the next. My mother (wisely) told us to stay out of the water (which also contained more than a little sewage), although she recounted her memories of the days in her youth when the river ran clear and clean, and she could swim in it.

Davidson’s article points out that Crane still uses cotton and linen to make the paper for our currency. This is just one of the features that makes it difficult to counterfeit the various denominations of dollar bills. The article highlights other technologies now in use:

The hundred-dollar bill, for example, is embedded with a micro-optic security ribbon—a blue line, next to Benjamin Franklin’s face, patterned with alternating images of the Liberty Bell and the number “100” which, when the bill is tilted, move up and down, left and right.

I remember the first time I saw one of these elaborately-designed bills, several months before they first went into production in February of 2010. It was at a political fundraiser in Pittsfield (the city next door to Dalton, where Crane is located). An employee of Crane had obtained permission to bring a prototype to the gathering, and it was enclosed in a heavy plastic sleeve on a chain that was handcuffed to his wrist. He was proud to show it around, but no one was allowed to touch even the plastic container. It struck me at the time that this kind of “security” was a bit of overkill amidst a group of people who were unlikely to become counterfeiters.

As a matter of fact, as the article makes clear, counterfeiting is not much of a problem at all. In a typical year, the Secret Service finds only a few million dollars’ worth of fake bills, not even a gnat on the elephant of the trillion dollars that is in circulation.

Larry Felix, the former director of the Bureau of Engraving and Printing, told me that anti-counterfeit measures “don’t make much sense from a direct financial perspective,” since the cost of preventing counterfeiting is much greater than the infinitesimal loss caused by fake bills. But these measures have a broader, psychological purpose. “Banknotes depend on confidence,” Felix told me. (Our paper bills are called banknotes because they are, technically, promissory notes—formal I.O.U.s—issued by the Federal Reserve.) “You accept a banknote because you figure the person you will hand it to will also accept it.” This is the essential circular mystery of money: its value comes from each of us believing that everybody else will continue to believe in its value. The physical bill reinforces this bit of theatre, with the feel of the cotton-and-linen paper reminding us that dollars are long-trusted, and the ever-upgraded magical effects reassuring us that they will hold value far into the future.

In the Georging world, there is much discussion about counterfeiting (and how silly it is for cashiers to examine small-denomination bills with the iodine pen that is designed to detect wood-based paper). There is also a lot of worry about how debit cards are undermining the hobby because people will be using less cash than they used to. So far, though, that doesn’t seem to be a problem, since the Treasury prints more banknotes every year, even adjusted for things like inflation and population growth.

The tie-in with autism will be obvious to those who know how detail-oriented we autistics are. I’m not suggesting that everyone who enjoys the hobby is autistic, but Georging provides a playground for those who thrill at the likes of finding unusual patterns in serial numbers, or collecting counties in the US where their bills have been found. And, perhaps most importantly for many, Georging provides a wonderful social aspect: no need for small talk, no need to even meet the people you are dealing with. Although we do have many in-person Gatherings.

All of this deserves more commentary, as I continue my search to define what it means to be autistic. Stay tuned.

Jul 24

Swarm Intelligence: I

I’ve been meaning for some time to write about swarm intelligence.

The basic insights have been around for at least 30 years, and, since then, the ideas have percolated into awareness and are now appreciated outside the scientific community.

One excellent summary can be found in Len Fisher’s 2009 book, The Perfect Swarm. I read this a while ago, and took notes, meaning to share my thoughts here, so stay tuned!

I am interested in many of the ideas associated with the study of collective behavior, because they tie in with other concepts that I find fascinating. A key theme in this work is self-organization, which has been used to explain subjects as diverse as crystal growth, sand piles, evolution, and human social organizations.

And, oddly enough, gambling. Or, perhaps, more kindly put, investing. I’ve been interested in both aspects of risk-taking for about as long as I can remember, and they are identical in one important aspect: trying to understand the risk associated with an uncertain outcome, and deciding if the price on offer is a fair one.

A couple of years before Fisher’s book was published, the National Geographic Magazine published an article entitled “Swarm Theory” that outlined the basic ideas in the field. They did get one thing terribly wrong, however.

It appears they may have lifted this example from The Wisdom of Crowds, by James Surowiecki (which I have not read, though from what I’ve read about it I believe I understand the points he makes). If so, then they repeated his error, but wrong is wrong…

My friend Bill Ziemba studied (back in the 1980s) racetrack betting, with an eye to discovering whether parimutuel odds were unbiased in their prediction of a horse’s chances of winning a race. He found, perhaps surprisingly, that they were. I read his book, and he sent me several academic articles (both published and unpublished) on the topic, as well as some large data sets and computer code. One thing he had discovered was that, although win odds were efficient (i.e. unbiased), show odds were not; and therein lay a profit opportunity.
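
The flavor of that kind of analysis can be sketched in a few lines. To be clear, this is not Ziemba’s actual model, just an illustration under a common simplifying assumption: if win odds are efficient, the win pool gives each horse’s true winning probability, and the Harville model (successive finishers drawn in proportion to the remaining win probabilities) converts those into top-three probabilities that can be compared against the show pool.

```python
from itertools import permutations

def top3_prob(win_probs, horse):
    """P(the given horse finishes in the top three) under the Harville
    model: the winner is drawn according to win_probs, then second and
    third are drawn in proportion to the remaining probabilities."""
    total = 0.0
    for order in permutations(range(len(win_probs)), 3):
        if horse not in order:
            continue
        p, remaining = 1.0, 1.0
        for h in order:
            p *= win_probs[h] / remaining
            remaining -= win_probs[h]
        total += p
    return total

# Hypothetical four-horse race, probabilities read off an efficient win pool.
win_probs = [0.40, 0.30, 0.20, 0.10]
probs = [top3_prob(win_probs, h) for h in range(len(win_probs))]
```

A show price implying a much lower top-three probability than this calculation yields is the sort of discrepancy that would signal a profit opportunity.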

We’ve not been in touch for some time, but I do notice on his website (the link just given) that he is in the process of updating his work, this time for exotic bets. I’ll be interested in seeing his results.

It may be that Surowiecki used Ziemba’s work in describing the behavior of horse race bettors, but in any case, the National Geographic description was dead wrong.

Why are [bettors at a horse race] so accurate in predicting the outcome of a race? At the moment the horses leave the starting gate, the odds posted on the pari-mutuel board, which are calculated from all bets put down, almost always predict the race’s outcome: Horses with the lowest odds normally finish first, those with the second lowest odds finish second, and so on.


This is not what Ziemba found, and the statement contradicts simple statistical evidence, as well as anecdotal observation by anyone who has ever paid attention at a race track.

The favorite, or “chalk” in each race (the horse with the lowest odds) will win less than half the time. They do not “normally finish first”! The odds against the outcome described in the quote here are long indeed; in all the years I’ve been betting on horses, I don’t remember it happening even once, except perhaps in a race with very few runners.

What Ziemba found, as stated above, is that parimutuel odds are “efficient” — which simply means that you cannot use the odds as a single piece of information to make money. A fair coin has 1 to 1 odds of coming up heads when it is flipped. These are efficient (fair) odds, and you cannot make money by betting for heads or tails, over time. You might win several times in a row, but over enough trials, you will break even.

Similarly, a horse that goes off at 10-1 might win, and you would get back your original bet and an additional sum of 10 times what you bet. But if you bet on 10-1 horses over a long string of races, you should expect them to win only once in every 11 bets, and you would break even. (Except, of course, that the track deducts a fee from the parimutuel pool before computing the odds, so you will be out that fee of 15% or whatever is being deducted.)
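
A few lines of arithmetic make the point concrete. In a parimutuel pool where the fraction of money bet on each horse matches its true winning probability, the expected profit on any bet is zero before the fee and exactly minus the takeout after it (the 15% figure below is just the illustrative fee mentioned above):

```python
def parimutuel_ev(true_prob, takeout=0.0):
    """Expected profit per $1 bet when the fraction of the pool bet on
    the horse equals its true win probability (an 'efficient' market).
    Payout per $1 on a winner is (1 - takeout) / fraction_bet."""
    payout = (1 - takeout) / true_prob   # total returned on a win
    return true_prob * payout - 1        # expectation, minus the $1 staked

even_money = parimutuel_ev(1 / 2)        # fair coin: breaks even
longshot = parimutuel_ev(1 / 11)         # the 10-1 horse: also breaks even
with_fee = parimutuel_ev(1 / 11, 0.15)   # with a 15% takeout: lose the fee
```

Whatever the horse’s odds, the expectation with a takeout comes out to the takeout itself, which is why knowing that the odds are efficient is not, by itself, a way to make money.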

Such are the complexities of emergent behavior arising from complex, self-organizing systems. Care must be taken not to view the system as having a central intelligence. Adam Smith had it right when he said there was an “invisible hand” at work, although not all of his interpretations were correct. When people talk about “the market” they are describing the properties of an emergent behavior arising from the actions of individuals. A subject for another essay.

Jun 07

Anxiety: Who? Me?

I’m participating in a panel on Friday (June 9) at Northeastern University on Anxiety, as part of the CHATTER event.

Of course, I’m quite nervous about this.

As part of my preparation (which helps to reduce anxiety), I have been reviewing my past talks and writings on the subject. I discovered that my blog does not contain a copy of a talk I gave in 2010 on the subject. So, here it is! Although I would change some of the wording if I were to rewrite it, the thoughts expressed are still pretty much on the mark.

In addition, here are a couple of my blog entries that are relevant to the topic of anxiety:

  1. The Healing Power of Depression (2013)
  2. Growing Old Disgracefully (2014)

One common topic that comes up in these writings is medication. I’ll plan to say some words about that at the event on Friday. Related topics are mindfulness and alcohol use. Given that I’ll have only a few minutes to speak before we launch into an open conversation, I’ll have to think of some pithy things to say.

Mar 04

The State of the World is on My Mind

For many years, most of my blogging was about politics. In the period from early 2005 until late in 2008, I devoted my energies to political organizing, and I wrote nearly every day about opportunities for people in my region (the western four counties of Massachusetts) to participate in activities in support of causes and candidates I believed in.

In recent times, following my newly-acquired awareness (about ten years ago) that I am autistic, I have devoted my energies more to autism advocacy, and a lot of what I’ve written here has to do with disability rights and my understanding of the hidden role that autism has played in my life.

These topics, of course, are not mutually exclusive, and I have been involved in legislative advocacy to further the cause of state-supported programs to aid autistic people and their families and caregivers.

And I’ve also written about my pets, my hiking, my friends, and any other topic that seemed worth sharing. And I certainly will be doing much more writing about all of those things.

Today, though, I want to comment on an article I read, called to my attention by my friend Steve Silberman, about a deeply troubling look into the current state of affairs in just one important federal agency: the Department of State.

The article is

The State of Trump’s State Department
Anxiety and listless days as a foreign-policy bureaucracy confronts the possibility of radical change
by Julia Ioffe, March 1, 2017

I don’t need to comment on the distressing anecdotes in the story; you can read the article for yourself. What I want to add to this account of an agency adrift is my impression that it likely represents what is also going on elsewhere in the federal bureaucracy. The EPA seems to be in disarray; NOAA is under attack in terms of budget cuts; the list goes on.

I’m not exactly sure what the current misadministration in Washington means by “America First!” but it is probable that history can provide some guidance here. There have been other isolationist movements in this country, and, for the most part, they have not gained the upper hand. The pattern revealed in Ioffe’s article is one of disengagement from the world, which would be consistent with an isolationist philosophy, as I understand it.

I’m not sure how this puts “America First” since it seems to me that the career diplomats in State have devoted their energies to doing just that. By developing and maintaining networks of communication with their counterparts around the world, this cadre of civil servants has kept America’s interests at the forefront of awareness in every diplomatic capital on the globe.

It is disheartening to read that these networks are now not being supported. Again, I don’t need to go into detail; the article is clear about the current lack of communication and support being given to the employees of State. As an example:

According to the other people I spoke to, though, Tillerson seems cut off not just from the White House, but from the State Department. “The guidance from Tillerson has been, the less paper the better,” said the State Department staffer. “Voluntary papers are not exactly encouraged, so not much information is coming up to him. And nothing is flowing down from him to us. That, plus the absence of undersecretaries and assistant secretaries means there’s no guidance to the troops so we’re just marking time and responding.”

Also, it’s hard to dismiss this disarray as being symptomatic of a new Administration trying to learn the ropes. This, in my understanding, has never happened before. Each time in recent memory that there has been a transfer of power, the functions of State have gone on without interruption. This time, in one example often cited, the daily press briefings have stopped. That has not happened before, at least in my lifetime. And I’m old.

The basic outlines of the federal budget to be proposed have been revealed. More spending on an already-bloated military budget, less spending on just about everything else. It’s not clear to me how buying more bombs is going to help resolve the inherently political conflicts that are going on around the world. More diplomacy might be a good thing, not less.

Also, I thought about how easy it is to destroy things; how hard to create. All the networks/relationships that took years to cultivate will go out the window if people are made redundant, and it would take many more years to return to the same level of trust and communication. All the King’s horses and all the King’s men…

Those of us who have been predicting the end of the American Empire probably did not think it would end this way. I know I didn’t. Wearing my economist’s hat, I envisioned us being gradually overtaken by other economies that have been spending less of their money on the hardware of destruction and more on basic scientific research and economic infrastructure. It seems now that the process will not be so gradual, but will be hastened by the intentional actions of a cabal of “America First”ers who will undermine the very institutions that made this country great.

I fear that America will no longer be a desirable tourist destination, nor will it be the place to receive a world-class education. It will become an unwelcoming place, an unnecessary detour on the path to success. I hope I’m wrong, and I hope things turn around. Soon. At the moment, I am not optimistic. Ah, but it could be I’m not the only one who feels this way! Perhaps the people will rise up! That, my friends, is a topic for another essay.

Meanwhile, I leave you with this sobering thought from the Atlantic article about the current state of affairs. A situation we hope to change, of course, but for the moment

“This is probably what it felt like to be a British foreign service officer after World War II, when you realize, no, the sun actually does set on your empire.”

Feb 27

The Language of Autism: “Special Interest” as a Stigmatizing Phrase

When a non-autistic person studies something deeply, it’s an “area of expertise,” and the acquisition of such expertise is considered a commendable accomplishment. When an autistic person studies something deeply, it’s a “special interest,” and it’s considered a symptom of pathology.

Nick Walker

Nick’s post on Facebook really hit home for me, because, not long ago, I had been involved in an exchange about this very topic.

It’s hard to express how infantilizing and degrading it feels to hear the phrase “special interest” in connection with autistic behavior. I’m now in my eighth decade on this planet, so the term is not often applied to me; nevertheless, I cringe whenever I hear it. And I recently had an experience in which I was asked to describe my “special interests” — I can’t even…

I received a query from a local professional organization as to whether I would be willing to do an interview with a writer for a prominent national magazine. My understanding was that the reporter was interested in gathering information from autistic people with a variety of interests.

In an email to the writer, I expressed my willingness to be interviewed, and I provided quite a bit of background information about myself, hoping that would make the interview process go more quickly and smoothly.

In response, I received a form letter, with 5 questions. The first one was “tell me about yourself” and included items that I had already fully answered. The other 4 questions were about my purported “special interests.”

Naturally (as you will surmise, I suspect, if you are autistic), this got my dander up. Instead of answering the questions, I sent back a short essay about why I find the term “special interests” to be objectionable. I’ll summarize some of the points here, and expand on others. This list is not the response I gave, but it contains the essence of my essay. Some things to think about. In all of this, please keep in mind that I know that a lot of what I say is speculative, and I realize that I speak only for myself, not for anyone else, let alone autistic people as a group. We are just as varied among ourselves as neurotypical people are.

That said, here are my thoughts:

  • What is it about an autistic person’s interests that makes them “special”? Why aren’t the interests of neurotypical people “special”? What do you call an interest that is not “special”?
  • The request struck me as asking me to make fun of myself. As if having a deep interest in something was in fact odd, and amusing, much as one would humor or demean a person who collects worthless objects of some kind. All of this brought to mind the phrase coined by Jim Sinclair, an early pioneer of autism self-advocacy.
  • We are not “self-narrating zoo exhibits.”

  • If a young autistic person, such as my friend Tim Page, has a deep interest in music, is that a “special” interest? What do you call it when his knowledge of and love for music leads him into a career as a music critic, for which he is awarded a Pulitzer Prize?
  • What do you call the interest that a kid had in ants that led him to spend all of his free time exploring the woods, turning over rotten logs, and studying ant colonies? What do you call it when he becomes a professor at Harvard and the world’s foremost authority on ants? And one of the world’s foremost authorities on evolutionary biology and environmentalism? I’m talking, of course, about Edward O. Wilson. I don’t know if he’s autistic, although I’d guess so, but does it really matter? He is certainly one of my heroes, autistic or not. I have only met him once, and did not have enough time with him to form an opinion of his neurological status, but I do find it telling that he was the one to solve the social insect problem that plagued Darwin and caused him to delay publication of his theory of natural selection. And I’m quite sure that Darwin must have been autistic. Another likely autistic was a famous person born on the same day as Darwin. Abraham Lincoln‘s self-description is about as good an explanation of how the autistic brain works as I’ve ever seen.
  • I am slow to learn and slow to forget that which I have learned. My mind is like a piece of steel; very hard to scratch anything on it and almost impossible after you get it there to rub it out.

  • What do you call an interest in baseball that is so intense that it leads a person to neglect personal relationships? What do you call it when that interest becomes an obsession with being the best hitter of all time? And what do you call it when he makes that happen? Again, I don’t know if Ted Williams was autistic, but all the signs are there. Again, I could ask, does it matter? But maybe it does. Maybe the question is, could this have been achieved by a person who was not autistic?
  • What about Tesla, and Jefferson, and Einstein, and countless other (presumed) autistic people whose interests led to wonderful discoveries? Many pioneers in mathematics likely were autistic. Newton and Turing, to name the obvious. I’m not sure if the brilliant Pascal was autistic, but his interest in a gambling problem led to the groundwork upon which probability theory (statistics) was built.
  • What, in general, was the role of autistic people in history? We are known for our ability to recognize patterns (and deviations from patterns). My friend John Robison has speculated that autistics might have made up the priestly castes in ancient cultures; the ones who noticed the patterns of the sky and the seasons, and invented calendars and astronomy. They kept the records of floods and growing seasons, and made civilization possible. More recently, he has written about the possibility that early Pacific Ocean navigators were autistic.
  • The list goes on. My Hall of Fame grows.

Much of this is speculative, of course. Not every innovative person is autistic. And not every autistic person produces world-changing discoveries. Many autistic people who have focused interests may pursue them simply because they are satisfying in some way. Some may produce innovations that benefit only themselves or a few people around them. Some neurotypical people may have obsessive, focused interests that rival those of the most intense autistic people. Some autistic people may not have any obvious such interests. The world is as varied as the number of people in it. And thank goodness for that!

Please drop the modifier “special” when talking about the interests of autistic people. We know we’re different. We are aware of that; it is an awareness that is deep in our souls, from our earliest days of self-awareness, if my own experience is any guide. We want to be proud of our achievements, based on what we accomplish. Not as autistic people, but as people. Thank you.

Feb 13

Another Candidate for my Autism Hall of Fame: John Couch Adams

Isaac Newton (1642-1726) is often mentioned (and rightly so, from what I can tell) as having probably been autistic.

Now, I learn of a latter-day (1819-1892) kindred spirit.

John Couch Adams is known to history as having been hot on the trail of the discovery of Neptune, only to be beaten to the punch in 1846 by Urbain Jean-Joseph Le Verrier, who produced similar (and more accurate) calculations of its presumed orbit, enabling astronomer Johann Gottfried Galle to definitively identify the planet for the first time.

Neptune had been seen and noted by other astronomers, including Galileo (in 1612), but they had not recognized it as a planet. Over the years, irregularities in the orbit of Uranus had led to speculation that there might be another, more distant, planet whose gravitational pull was influencing the slightly erratic pathway of Uranus through the heavens.

Adams evidently had investigated this problem, and had either figured out the orbit of the unknown planet, or knew how to do so. He had presented his preliminary findings to the British Astronomer Royal, George Biddell Airy. Neither man actively pursued an empirical investigation, so the honors of the discovery of Neptune went to Le Verrier and Galle.


  • forgive the links to wikipedia: I’m aware this can be an unreliable and biased source at times, but there is also a plethora of external sources for well-known stories such as these, for those who wish to read more
  • to see a full-sized copy of the graphic below, click on the image, and then use the “back” function in your browser to return to this post

In my reading about this story, the article that caught my eye with respect to autism was one that appeared in Scientific American in December 2004, not long after some missing papers were found (in 1998) that filled in gaps in the historical record. It’s a good article, but it is unfortunately behind a paywall. The article is here for those who have access to it, and here is a graphic that summarizes the discussion.

The overview of the article states

  • The early 19th century had its own version of today’s dark matter problem: the planet Uranus was drifting off course. The mystery was solved in 1846, when observers, guided by theorists, discovered Neptune. Its gravity could account for Uranus’s wayward orbit.
  • Historians have traditionally apportioned credit between a French theorist, Urbain Jean Joseph Le Verrier, and an English one, John Couch Adams. Le Verrier’s role is undisputed, and so was Adams’s–until the mid-20th century.
  • Just as more historians were beginning to reexamine Adams’s role, a sheaf of crucial documents went missing from a British archive. It surfaced in Chile in 1998. The authors came across other crucial documents this past summer.
  • The bottom line is that Adams did some interesting calculations but deserves no credit for the discovery.

In writing about Adams, the authors say

His life paralleled that of Isaac Newton in some respects. Both grew up in the English countryside — Newton as the son of an illiterate yeoman in Lincolnshire, Adams as the son of a sharecropper in Cornwall. Both were interested from an early age in the regularities of mathematics and natural phenomena; both drove pegs or cut notches in window ledges or walls to mark the seasonal motion of the sun. They had similar idiosyncrasies: sobriety, fastidiousness, religious scrupulosity. Contemporaries viewed them as dreamy, eccentric and absentminded. Both Newton and Adams would probably be regarded today [2004] as having Asperger’s syndrome, sometimes known as high-intelligence autism.

Other anecdotes in the article also paint Adams as the typical “absentminded professor” that Hans Asperger himself had mentioned as an example of how autism is not rare, but can be seen all around if one knows what to look for.

Adams arrived to take up his studies at Cambridge in 1839.

His landlady said she “sometimes found him lying on the sofa, with neither books nor papers near him; but not infrequently … when she wanted to speak to him, the only way to attract his attention, was to go up to him and touch his shoulder; calling to him was no use.”

Oh, how familiar that is to me! I can’t say how many times I was told I had been addressed, and made no acknowledgement of having been spoken to, although I seemed alert. Sometimes I would remember the incident, but often I did not, my mind having been occupied with some problem or thought that excluded the outside world from interfering.

Elsewhere, the article describes Adams as a retiring, self-effacing man, who perhaps could have earned credit for discovering Neptune had it not been for his tendency to procrastinate and his dislike of writing. Again, I can relate to these characteristics, as well as to the description given toward the end of the article:

A discovery does not consist merely of launching a tentative exploration of an interesting problem and producing some calculations; it also involves realizing that one has made a discovery and conveying it effectively to the scientific world. Discovery thus has a public as well as a private side. Adams accomplished only half of this two-part task. Ironically, the very personal qualities that gave Le Verrier the edge in making the discovery — his brashness and abrasiveness, as opposed to Adams’s shyness and naïveté — worked against him in the postdiscovery spin-doctoring. The British scientific establishment closed ranks behind Adams, whereas Le Verrier was unpopular among his colleagues.

In my career as a financial analyst, I always felt if I did a good job and produced superior results, my work would be recognized and rewarded. To a certain extent that was true, but I can see now that a good measure of my success was aided by people around me who were willing to publicize my achievements.

It’s possible that I could have done bigger and better things if I had been more of a Le Verrier type and less like Adams, but I was more than satisfied with what I had accomplished, and really did not crave any more attention. I can see now that this attitude did not necessarily lead to maximizing my economic value (which is okay by me). I really loved doing research and exploring new ideas. That was the thrill for me. Having solved one problem, I was ready to move on. But my employer on Wall Street had other priorities.

At one point in my career, I had been using a set of models and a method of analysis to ferret out values within the largest American companies whose stocks were traded by my firm. I wrote a weekly market commentary and provided detailed lists to my clients. At first it was fun, because I was still learning. After a time, though, I began to get bored, despite the dynamism of the marketplace. I wanted a new challenge.

I began to dabble in analyzing other markets: in Europe, Asia, and elsewhere. I published some (internal) research papers that were well-received, and I had visions of taking over the world (so to speak). My company had just purchased the largest historical database in existence of financial and market price data, covering thousands of companies around the world. But I knew I couldn’t do everything. It would be a huge undertaking to tackle this new challenge.

I went to my boss and told him of my desire to undertake this project. I had confidence in the team that worked for me, and I knew that I could figure out how to create a great value-added product. I explained that in order to do that, I’d have to give up the work that I was already doing, because that was a full-time job in itself, but that was okay with me because I felt a need to move on.

It would be another 15 years or more before I would become aware of my own autism. Perhaps with an expanded understanding of how I fit into the world (or don’t), I would have recognized my own naïveté. Peter listened attentively, and he was very supportive. “Sure, you can do that, it sounds great! Just one thing, though. You have to keep doing what you’re doing now. Our clients love you!” I was trapped by my own success!

I can also relate to Adams’s reluctance to push himself on to others. In explaining why he had failed to convince his colleagues at Cambridge to pursue a search for the mysterious planet, he admitted to not taking the time to fully explain his ideas.

I could not expect however that practical astronomers, who were already fully occupied with important labours, would feel as much confidence in the results of my investigations, as I myself did.

This brings to mind a conversation I had with a couple of brothers in a family that appeared to me to have plenty of autism running through all three generations I knew. “We’re not very good self-promoters!” one of them said as we discussed the challenges of setting up a new business. His brother laughed and nodded.

I never got to do that international project. A few months later, I left that firm to take a job in a new city, where I was offered a chance to take on new challenges. I had to take a cut in pay, but I had my priorities.

Adams was honored in many ways during his lifetime. He is said to have turned down a knighthood that was offered by Queen Victoria. In 1881, he was offered the position of Astronomer Royal, but he preferred to pursue his teaching and research in Cambridge.

Feb 11

I Was Ahead of My Time (But I Knew That!)


  • There is a new style of AI (artificial intelligence) that has, in recent years, taken the world by storm. Called Deep Learning, this approach has made possible self-driving cars, enhanced voice and image recognition, and audible translation from one language to another, to name just a few breakthroughs. Most of what we observe being done in such areas would have been only the stuff of science fiction as recently as 15 or 20 years ago.
  • Much of my career was spent creating computer models to analyze the world of finance and investments. For 30 years, beginning in the mid-1970s, I was involved in cutting-edge research that explored ways to control risk, identify arbitrage opportunities, and find value-added investments for the money managers, pension funds, governments, and foundations who were my clients.
  • For most of that period, AI was only a curiosity: a field in which the ideas were bigger than the capacity of computers to carry them out. If you saw the movie The Imitation Game, you may remember the scene in which the computer took more than 24 hours to solve the Nazi code, which changed daily. A few years later, a famous mathematician came to UMass in Amherst for a guest lecture, and announced that he was now able to forecast tomorrow’s weather. The catch was, he admitted, that the calculations were so complex it would take him a month to do it. But eventually, these problems became tractable, and computers “learned” to play chess and do other amusing things in real time.
  • In the latter phase of my computer modeling days, I played with primitive forms of AI, such as neural networks, and feedback models. I found these approaches to be useful additions to the work I was doing, although I realized I could not do all that I would like because of capacity constraints (on my time and in computer power).
  • I’m delighted to find that the things I could only imagine 15 years ago are now being implemented on a large scale. I take no credit for promulgating these ideas; they were the next logical step in the technology of the time. I only hope that these recent rapid advances presage solutions to some of our most vexing problems. I’m optimistic because I don’t like to think of the cost of failure.

The Old Days

I worked on Wall Street in the 1980s for Morgan Stanley, one of the most technologically advanced companies of the day. Indeed, many of my early projects involved creating automated trading systems to execute sophisticated strategies that simply could not be done efficiently by a human trader.

We had at our disposal IBM’s most powerful and sophisticated mainframe computers, and we coded in APL, a language well-suited to the large data matrices that we needed to handle. Some of the models I created would bring these computers to their knees, so it was clear we were working at the frontiers of this kind of analysis.

When a technical problem arose, I had a number I would call for assistance. The person at the other end would always answer, “VR, how can I help you?” and I would explain my situation. One day, after I’d been working there a while, I asked a co-worker, “What does ‘VR’ stand for?” Oh, I was told, that stands for “voice recognition.” It seems that there had been an ambitious effort to save the traders from writing a ticket for each trade. Instead, they would call a phone number and recite the terms of the trade and it would be recorded directly on the computer system. The tests went well, and all the traders were trained, and they threw away their paper pads. But in real life, it was a disaster! Quickly, a fall-back plan was implemented. Instead of the computer answering the phone call, an actual person would take the call and write up the ticket, then enter it into the computer system. “Voice recognition,” to be sure — the old-fashioned way.

Machines Who Learn

There was an article in the June 2016 issue of Scientific American by this title, linked to (but behind a pay wall) in this write-up. The editorial choice of the pronoun is telling. Machines have begun to think like people. Instead of being pre-programmed to anticipate every possible situation (an impossibility), they are now made to learn from experience, as humans and other living beings do.

The article points out that the advances in AI have been made possible by improvements on two fronts: hardware and software. The hardware of 30 years ago was unable to cope with the demands of even (by today’s standards) simple systems. Among other innovations, there has been (at least) a 10-fold increase in computing speed

thanks to the graphics-processing units initially designed for video games…

On the software side, the early primitive neural networks have moved away from a static, linear model to more sophisticated feedback algorithms.

Reading about this brought back fond memories for me, of my days playing with early versions of these approaches. By the time these models had come into experimental use, I had gone off on my own, and had neither the resources nor the time to pursue them in the ways I could envision them being used. I was too busy earning a living, which involved a lot of expert witness work and real-world forecasting. I wish there had been more time to do pure research, although I did play around enough with the ideas, in what little time I could spare, to derive some insights that helped with the services I offered. That analysis included models I used to forecast foreign exchange rates, among other things.

The following graphic from the SciAm article brought me chuckles of recognition. Recognition of the natural, human variety, that is, not the kind mentioned here:

The basic structure shown here is identical in concept to the primitive neural networks that I created 15 or 20 years ago. Choices are made as to how many input nodes there are to be. This would correspond to information thought to be relevant to the problem at hand. In the case of forecasting currency exchange rates, for example, these inputs might include interest rates, inflation rates, the price of oil, and other exchange rates, among many other variables.

An arbitrary number of “hidden layers” is chosen (guided by experience, to be sure), and historical data are fed into the algorithm, which produces a forecast that can then be compared with the actual historical outcome. The network is “trained” on a bunch of such data until its internal coefficients are deemed to have reached an “optimal” level for forecasting.
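The structure described above can be sketched in a few lines of Python. This is only an illustration of the general idea: the three input variables, the network size, the learning rate, and the made-up training data are all hypothetical, not taken from any model discussed in this post.

```python
import math
import random

random.seed(0)  # make the illustration reproducible

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy network: 3 input nodes (stand-ins for, say, an interest rate, an
# inflation rate, and an oil price), one hidden layer of 4 nodes, and a
# single output node (the "forecast").
N_IN, N_HID = 3, 4
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hid]
    return sum(w * h for w, h in zip(w_out, hidden)), hidden

def train(samples, rate=0.5, epochs=3000):
    # "Training": repeatedly nudge every weight downhill against the
    # squared forecast error, one historical sample at a time.
    for _ in range(epochs):
        for x, target in samples:
            y, hidden = forward(x)
            err = y - target
            for j, h in enumerate(hidden):
                grad_h = err * w_out[j] * h * (1 - h)  # backpropagated error
                for i in range(N_IN):
                    w_hid[j][i] -= rate * grad_h * x[i]
                w_out[j] -= rate * err * h

# Made-up "historical" data: the target is an arbitrary function of the inputs.
data = [((a, b, c), 0.3 * a + 0.5 * b * c)
        for a in (0.1, 0.5, 0.9) for b in (0.2, 0.8) for c in (0.3, 0.7)]
train(data)
errors = [abs(forward(x)[0] - t) for x, t in data]
print(round(max(errors), 3))  # worst in-sample forecast error after training
```

Whether the coefficients the network settles on are “optimal” for anything beyond the training sample is exactly the question raised in the next paragraph.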

I’m being vague here because I don’t want to get bogged down in technical detail. Suffice it to say that I did not find this approach to be very useful. The obvious problems are that there may be relevant variables that had not been included in the data set, or that the historical period studied may not be representative of the world going forward. And the structure in those early days did not allow for the model, once in place, to learn from its experience.

These same criticisms, by the way, could have been leveled against the more conventional econometric models of the time. Those models did not have “hidden layers” but they were linear and static. They worked fine until they didn’t, when the world changed, or some catastrophic event occurred.

I felt a need for a more dynamic approach, and at the time I was learning about complexity theory.

[All of this talk about my work is good material for another post (so stay tuned); for now, just some hints!]

I became familiar with a 1976 paper by Robert May (now Lord May), a naturalist who studied animal population fluctuations. He analyzed the effects of (unrealistically) high growth rate assumptions in the logistic difference equation. This equation had been around for a long time (first published in 1845), and was used to predict the ups and downs of population density. May demonstrated that this equation, which had long been thought to be quite orderly, could in fact produce chaotic results. The thing that intrigued me about this equation was not so much its transition from the orderly regime into the chaotic (which was indeed fascinating), but that it had what I would come to call a “learning coefficient.”

This equation has a feedback component, because the prediction for the next period depends on an estimate of the population’s natural growth rate (assumed to be the same over time), and a measurement of how close the population size is to its maximum (the largest population its environment could sustain). Of course these values cannot be known with precision, but using historical observations as approximations will produce graphs (or maps) that look very much like what happens in nature.
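The equation May studied is compact enough to show: next period’s population is r·x·(1−x), where x is this period’s population as a fraction of the maximum and r is the growth rate. A short sketch (my own illustration, not from May’s paper) shows both regimes:

```python
def logistic_trajectory(r, x0=0.5, steps=200):
    """Iterate the logistic map x_{n+1} = r * x * (1 - x)."""
    x = x0
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

# Orderly regime: for r = 2.8 the population settles onto the fixed
# point x* = 1 - 1/r, regardless of where it starts.
settled = logistic_trajectory(2.8)[-1]
fixed_point = 1 - 1 / 2.8
print(round(settled, 4), round(fixed_point, 4))

# Chaotic regime: for r = 3.9 the trajectory never settles, and two
# starting values that differ by only one part in a million soon
# diverge to completely different histories.
a = logistic_trajectory(3.9, x0=0.500000)
b = logistic_trajectory(3.9, x0=0.500001)
print(round(max(abs(u - v) for u, v in zip(a, b)), 3))
```

The same few characters of arithmetic produce either tidy convergence or chaos, depending only on the assumed growth rate — which is what made May’s result so striking.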

What I took away from my study of this equation was the idea that there could be a built-in “error correction” that could operate dynamically. In the case of animal populations, the corrections were made by a natural response to the availability of resources. When a population is small relative to food supply, growth is rapid. As the size of the population nears its maximum, starvation will reduce the population back to the point that it can start growing again. In real life, things are more complex than this, because of predator/prey interactions — but I digress!

Referring back to the currency-forecasting model I mentioned: my complaint had been that its operation was too dependent on the data set that was arbitrarily chosen as being “typical” of what the future might bring. And it was difficult to know when the world might have changed enough to warrant re-estimating the model. But what if the model could learn from experience? I figured out a way to do this, and I’ll spare you the details, but my tests showed that my new approach was superior to the one I had been using (at least it was when tested on historical data).
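Since the details are deliberately spared here, the following is purely my own hypothetical sketch of the general idea, not the author’s actual method: a forecasting coefficient that is nudged by each period’s error, so the rule gradually re-learns when the world changes.

```python
def adaptive_forecaster(series, rate=0.3):
    """Forecast each value as coef * previous value; after each period,
    nudge coef in proportion to the forecast error. A minimal
    error-correction rule that adapts instead of staying static."""
    coef = 1.0
    errors = []
    prev = series[0]
    for actual in series[1:]:
        forecast = coef * prev
        err = actual - forecast
        errors.append(abs(err))
        coef += rate * err / prev  # learn from this period's mistake
        prev = actual
    return coef, errors

# A made-up "exchange rate" whose regime shifts midway: steady 1% growth,
# then steady 2% decline. The coefficient tracks the old regime, makes a
# large error at the break, then re-learns the new one.
series = [100 * 1.01 ** t for t in range(30)] + \
         [100 * 1.01 ** 29 * 0.98 ** t for t in range(1, 30)]
coef, errors = adaptive_forecaster(series)
print(round(coef, 3))  # near 0.98 after adapting to the second regime
```

A static model estimated on the first regime would keep making the large error forever; the adaptive rule’s errors shrink back toward zero within a few periods of the break.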

So I went “live” with the new model, and incorporated it into the research and advice I was selling. It continued to work well. Mind you, in the world of economics and finance (a bit like weather, I guess), there is no such thing as an “accurate” forecast. The standard, rather, is whether a forecasting algorithm is better than others that are available. Being a little bit better than the competition is all that can be hoped for (and is rewarded) in the fast-paced world of financial markets.

I had dreams of going the next step, which would have involved letting the computer learn how to adjust other features of the model. Adding another layer, in other words, in which the algorithm would “teach” the components that were already learning, by observing their performance and adjusting how they made their adjustments.

I never got to do that, however, because, as the saying goes, life is what happens while you are making other plans. For various reasons, I decided to exit the business, and never got to play with my next level of ideas.

Imagine my delight, then, when I saw the diagram above, and from the description in the article, realized that my vision was now a reality, and is what is driving this new wave of AI.

I’m not claiming any credit for promulgating these ideas, because I did not. They were an obvious outgrowth of the work that was going on 10 to 15 years ago, as I was winding down my involvement. At the time, as the SciAm article discusses, computing power was inadequate to solve many real-world problems. I was already working at the fringes of this line of computing, and it wasn’t easy. But it sure was fun. And it’s fun for me now to see where things have gone.

I have every reason to believe that in another 10 or 15 years, we will be witnessing new applications that are only a glimmer today. Perhaps we will even figure out how to ameliorate the vexing problems that threaten us today, such as overpopulation, deforestation, species loss, and climate change. I’d like to think we can do that, because I don’t want to contemplate the alternative.

Jan 28

The Good Nudge

Another Obama program that may or may not survive in the new Administration.

A recent (January 23, 2017) issue of The New Yorker contains an article (“Good Behavior”) that describes the final days of an Obama initiative to use behavioral science in the service of improving government performance.

The article focuses on the Flint water crisis, and mentions several other projects that have used this approach.

The story is a hopeful one, in the sense that there are possibilities for doing good with the proper application of what is called “choice architecture.” There is also a warning here, that such an approach is value-neutral, so can be used in a negative way, as well. Trump, Hitler, and Stalin are all cited as people who have used “the behavioral arts” to influence public opinion.

George Lakoff has lectured and written on this subject for many years, pointing out many of the same themes that are mentioned in this article.

Language matters; it reveals our values and it helps shape them, for better or for worse.

Jan 18

Look What I Got in the Mail

I was 15 years old.

My grandmother had sent me this notecard. I knew it was coming, because she had told me about it. “Someday,” she told me, “you will be very proud to have this.”

I didn’t have to wait; I was proud of it from the moment she told me about it. I was very close to my grandmother. For years, I often went to visit her and spend time with her in her office after my school day was done. She told me stories. She gave me things to read. She showed me things in her world, which was the Historical Room in the Stockbridge Library. I loved every bit of it.

So now, 55 years later, I am still proud of my heritage.

Jan 16

Let’s See if I Can Tape This All Together

What do cupcakes and chocolate have in common? I guess that’s pretty obvious, but Scotch Tape?

In September 2009 Scientific American devoted an entire issue to “Origins” and I’ve chosen three of my favorites to link together here.

First up, cupcakes: where and when were they invented, and whence the name?


If you click on the images in this post, you will see full-size (readable) versions, in case you care to look at the details. Then, click the “back” icon to return to reading the post.

The point of this “Origins” blurb is that the cupcake is probably an American invention, first noted in 1826, and was likely a variant of the British “pound cake.” I’m not much of a baker, so I used my mathematical propensities to come up with a likely explanation for the names of these two cakes. A pound cake, I reasoned, weighed a pound, and I remember my mother’s folk wisdom, which she drummed into me when I was young and learning about such things in the world, “A pint is a pound, the world round,” she would chant whenever I asked her how many ounces were in a cup or a pint or a quart. As it turns out, things are a lot more complicated than that, but my childhood understanding that a pint of water weighed about a pound, and both contained 16 ounces, made it easier for me to do conversions.

So, when I read that the American cupcake is a downsized version of the British pound cake, and being familiar with the traditional shape of the cupcake, I immediately fantasized that cupcakes must have been baked in small cups, unlike the pound cake, which must have been baked in pint-sized mugs.


In this miraculous Age of the Interwebs, I was able to discover that the origin of the name “pound cake” came from its simple proportions: one pound each of flour, butter, eggs, and sugar. This recipe became popular in the early 1700s, perhaps because it was an easy one to remember. A cake of any size, made with these same ingredients in equal proportions, is called a pound cake.

Although my intuition about the origin of the cupcake name seems to have more support among food historians than the explanation given in the SciAm version, there seems to be some disagreement as to when the name first appeared. Some sources have 1828, instead of the 1826 mentioned here. In any case, the 1796 date is often cited as the date of the first known recipe, published under the name “a light cake to bake in small cups” — a recipe which gives the lie to the idea that it is simply a smaller pound cake, both upon inspection and because the author gives a separate recipe for a pound cake.

_A light Cake to bake in small cups_.

Half a pound sugar, half a pound butter, rubbed into two pounds flour,
one glass wine, one do. rose water, two do. emptins, a nutmeg, cinnamon
and currants.

Evidently the “small cups” referred to any cups that happened to be available, not to the 8-ounce standard measure of a “cup” or “glass” that came to be used later. Metal baking trays came later still, and the paper holders we are familiar with did not come into widespread use until the 1950s.

Early cupcake recipes often followed the “1234” formula, which also makes it clear that they were not a smaller version of the pound cake. “Quarter cakes” these were sometimes called, not because of their size, but because of their four ingredients: 1 cup butter, 2 cups sugar, 3 cups flour, and 4 eggs.

In my excursions through the history of cakes, I noticed (not for the first time) that many older references used the word “receipts” in the same way we now use “recipe” — these words have related etymologies.

In the Middle Ages, a doctor’s instructions for taking a drug would begin with the Latin word recipe, literally, “take!” Recipe is a form used in commands of the verb recipere, meaning “to take” or “to receive.” The verb receive itself comes from Latin recipere, but through French—as does the word receipt, which was once commonly used to mean “recipe.” From its use as a name for a drug prescription, recipe extended its meaning to cover instructions for making other things we consume, such as prepared food.

The “recipe” in a drug prescription is now universally abbreviated to “Rx” — I imagine that most people don’t know what it stands for.

I wonder if my mother ever knew that the British Empire used the Imperial Pint of 20 ounces. There were still 8 pints in a gallon, making the Imperial Gallon 25% more voluminous than the American version. Also, as I discovered in my trips to Canada back in the day, 25% more expensive. MPG were better, though.

I trusted my mother’s keen sense of practicality when it came to dealing with life’s pragmatic challenges. I remember chatting with her one time in the early 1970s, about 10 years after I had left Stockbridge to move to Springfield. I can’t fix the exact date, but I know it was after the moon landing in 1969, and before she moved out of our old house in South Lee, eventually to become the first resident of the new housing project for the elderly in Stockbridge named Heaton Court, after a brief stay in an apartment near the end of Park Street, the street where we had lived in my early childhood.

There was no television set in our home for most of my growing-up years. It remained that way until my mother won a small black-and-white set in a charity raffle conducted by the Elm Street Market. I suspected that Mike Abdulla might have put in the fix for her, probably knowing that we were one of few families in town without a TV. But perhaps it was just a stroke of good luck. In any case, it became a fixture in the house, though in my teenage years I spent less and less time at home, so didn’t really watch it much.

On that day I remember, I looked at the TV set, and that set me to thinking about how our family had made a tardy move into that era. I used to watch TV at friends’ houses, although my mother placed a strict limit of 2 hours on Saturday and 1 hour per day during the week. I had to choose carefully. The first time I ever saw a TV show was in the Rinsma’s house on Yale Court. We were invited over to watch their new set. The screen was probably 8 or 10 inches on the diagonal, set inside a huge cabinet, and it was hard to see, what with all the people crowded into the living room. The show we were eager to see was live, as were most shows in those early days. Finally, the time arrived, and Ed Sullivan came beaming into our midst.

While pondering that, I remembered many of my mother’s stories of her youth. She had told me that when she lived on Hawthorne Road in Stockbridge in the late 1920s, there were still more people using a horse and buggy to get around than were driving automobiles. She also told me about the early days of radio, and of the trolley cars that used to ply the Berkshires. We would take the train from Stockbridge to Pittsfield once a year to visit Santa Claus at England Brothers Department Store on North Street. That was where I saw and rode, for the first time, an escalator and an elevator!

When I was young, and we were living on Park Street, we had an ice box. It was an exciting day when our first refrigerator was delivered. For years after that, though, my mother would refer to it as an ice box. “Mom!” we would object, “it’s not an ice box, it’s a fridge!” Similarly, she called aluminum foil “tinfoil” because that’s what it had been when she was growing up.

Breaking out of my reverie, I wanted to know what my mother thought of all those changes. “Mom,” I said, “you’ve seen a lot of new technology during your lifetime. You’ve seen automobiles come into common use, you’ve witnessed the advent of television, you know that I work with computers that didn’t exist just a few years ago, and now you’ve seen a man walk on the moon. This must all seem rather astounding to you. I’m just wondering, of all these marvels, and with all the other new things you’ve seen, which one would you say has made the most difference in your life?”

Without a moment’s hesitation, she answered. Yes, that’s right, she said: “Scotch Tape!”

I would never have guessed that. I could see how she might say mimeograph machines, or color film, or something fairly prosaic, given her penchant for down-to-earth results, but Scotch Tape?! That took the cake.

And we can take the cake into the realm of chocolate. My friends Joe and Roxanne have recently experimented with several dietary changes, and have rejected most of them, but have decided to stick with being gluten-free. Needless to say, this has become quite trendy of late, with an article I read not long ago asserting that one-third of Americans are trying to cut back on or eliminate gluten from their diet. I had no intention of being a trend-setter when I moved away from eating gluten about 15 years ago. I had been very sick for quite some time before I figured out the cause.

In my early days of being gluten-free, obtaining bread, pasta, and other basics was next to impossible. Mostly, they were available only in health-food stores, and what was on offer was often unappealing in terms of taste or texture. As more people have discovered that going to a gluten-free diet makes them feel better, demand has increased to the point that almost every restaurant has identified which items on the menu are gluten-free, and supermarkets have special gluten-free sections.

Joe had a milestone birthday last September, and Roxanne secretly made a chocolate cake for him to bring and share with our hiking group. The cake was a big hit, with everyone (me included) exclaiming, “This is gluten-free?” in disbelief.

Joe, being the cook of the family, has shared with me many tips on brands to try of bread, pancake mix, and the like. I’ve been a little hesitant to get into chocolate cake production, however, since I think substituting sugar for gluten is probably not the way to a healthy lifestyle. Chocolate, however, now that’s a different story! At a recent party at their house, chocolate cupcakes were on offer. I begged to be able to take one home, and I was presented with not one, but three, as well as some chocolate chip cookies.

This short piece extols the health benefits of chocolate. The cupcake version comes with a fair amount of sugar, I suppose (I don’t really want to know!). Everything in moderation, I’m told.

Also, I note,

Chocolate may also be good for the mind: a recent study in Norway found that elderly men consuming chocolate, wine, or tea — all flavonoid-rich foods — performed better on cognitive tests.

I don’t know what their definition of “elderly” was, but I’m not taking any chances! Excuse me; I need to go get another cup of coffee.
