Notes to myself

An effort to extend the time between the recently learned and soon forgotten

July, 2015

Obesity and diabetes

Last month I attended the American Diabetes Association meeting in Boston. While my presence at this meeting was admittedly motivated mostly by the recent release of our software type2diabetesgenetics.org, it was also an opportunity to learn more about this terrible disease. Clearly the danger of this disease is on the increase for many different populations. I took a particularly alarming set of maps off of the CDC website and ran them together to create a short video graphic that illustrates obesity trends in the United States. Note that the rates of obesity in the US (~68%) are far from the world's worst. The small South Pacific island nation of Tonga is winning that race (86% of the population are overweight or obese), though many other countries are distressingly close to that number.

(Note: If you wish to compare obesity trends over time and region yourself then I would recommend the following interactive visualization from the Institute for Health Metrics and Evaluation. The health impacts of the phenomenon are deeply troubling, but from a software engineering perspective the interactive graphic itself is awesome!)

While obesity is clearly a driving cause of type 2 diabetes, the amount of fat on your body is not the only factor. For this reason the nations of South and East Asia (including China and India) are those currently facing the gravest dangers from diabetes, even though obesity is less common there than in many other countries, including the United States. Other factors, at least partially genetic, cause rising rates of obesity in these parts of Asia to lead to a disproportionately large increase in rates of diabetes. This disease is becoming an ever more critical danger to public health, especially as more people in these regions adopt a so-called 'Western' diet (high in meat, fat, and heavily processed foods) and thereby grow increasingly overweight.

The ADA conference also provided an interesting perspective on the business side of healthcare in the United States. In my capacity as a software engineer I have attended many technical conferences in the biotech realm, and the exhibit floor is typically occupied by companies selling big-ticket instrumentation, along with other companies that set up booths to sell software and/or services. At the ADA conference, on the other hand, it was perfectly clear where the money came from, and it had little to do with instrumentation. Instead pharmaceutical firms constructed enormous booths, staffed by smiling and attractive people who tossed out endless quantities of swag and did their very best to get at least an email address from you, if not something more. I would love to include a picture of the expo floor, but of course pictures are strictly forbidden.

Pharmaceutical companies are not only buying floor space at conferences, of course, but are also funding research at academic institutions, as well as at startup companies and big corporations. There is unfortunately an unavoidable conflict of interest for large drug companies, because in fact they make no profits at all from those of us who are well. To be fair they also make no profits off of those of us who are dead, so the goals of big pharma are not _entirely_ antithetical to our goals as a society (just mostly). These companies' biggest profits come from the long-term, chronically ill, and in particular from those so sick that they will pay any price and endure any side effects in order to try to feel better. Surely this profit motive shifts the emphasis away from prevention and toward the amelioration of symptoms and the easing of suffering. No one can oppose efforts to comfort the sick, of course, but in a world of finite resources it would be nice to believe that resources are being used to minimize sickness for the greatest number, regardless of whether the strategies assure a steady stream of chronically sick people or not.

Clash of curves

Writing over 200 years ago, Robert Malthus proposed an inevitable contradiction between two rates of growth. He pointed out that population growth is inherently a geometric (exponential) process, while the available food supply can grow only arithmetically. Growing populations would therefore inevitably run up against inadequate food supplies, leading to periods of starvation and population decrease, after which the cycle would begin again. Critics have since argued that Malthus failed to appreciate the influence of technology, and that ongoing technological improvement would allow us to forever stave off a 'Malthusian catastrophe'. And on the world scale it would seem that the technologists have, at least so far, been closer to the truth.
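Malthus's point is easy to see with a toy calculation. The sketch below uses entirely made-up numbers (starting values, a 30% per-generation growth rate, a fixed food increment) chosen only to show the shapes of the two curves, not to model any real demography: a quantity multiplied each step must eventually overtake a quantity that only gains a fixed increment each step.

```python
# Toy illustration of Malthus's two curves. All numbers are arbitrary,
# chosen only to demonstrate the shapes of the growth processes.
population = 100.0   # grows geometrically: multiplied each generation
food = 200.0         # grows arithmetically: fixed increment each generation

growth_rate = 1.3    # 30% population growth per generation
food_increment = 50  # additional food units per generation

for generation in range(1, 21):
    population *= growth_rate
    food += food_increment
    if population > food:
        print(f"Population overtakes food supply at generation {generation}")
        break
# → Population overtakes food supply at generation 7
```

Raising the food increment or lowering the growth rate delays the crossover but never eliminates it; only changing the *form* of one of the curves (which is what the technology optimists claim innovation does) can prevent the two from meeting.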

Exponential curves are unwieldy things, and the argument that technology can continually improve at an increasing rate seems difficult to justify. Taken at the grandest scale of human development, however, the argument of the technophiles seems so far to have been running according to plan.

The first stone tools produced by our australopithecine ancestors were pretty primitive things. These rock chips appear very similar to all the unintentionally broken rocks amongst which they have been unearthed in East Africa. These 'Oldowan tools' were created around 2.6 million years ago, and they represent the first engineering efforts of the creatures that would go on to become Homo erectus.

The next big technological development was the hand axe. These rocks represented an important technological improvement: they were distinctively shaped on one or both sides, and show evidence of being sharpened, and then later re-sharpened, sometimes repeatedly. Some of the stones are not native to the area in which they were found, and thus were likely carried considerable distances. Taken together these implements are known as the Acheulean toolkit, and they were a marked improvement over the Oldowan tools originally discovered in Olduvai Gorge. Interestingly, however, these hand axes were not stepping stones to immediate further improvements. For roughly a million years these stone hand axes represented a technological plateau, serving our ancestors with little change for a very long time.

Change did eventually come, of course, and the period since the Acheulean tool industry has been marked by ever increasing rapidity of change. Next came the middle Paleolithic starting around 250,000 years ago, with the first developments that might be considered ornamental or symbolic. By 50,000 years ago came the beginning of the upper Paleolithic, with an explosion of new technologies, including fishing, knife blades, and bone artifacts. By 30,000 years ago, in the late Stone Age, our ancestors had developed complex tools and rich symbolic cave art. From 10,000 years ago came the first development of agriculture, followed by cities, higher population densities, writing, and much else.

The rate of change continues to increase, going ever faster. When I first started working with computers professionally in 1985 it was common to spend a few years learning an operating system and language combination, and when I switched to a new job I would expect to spend 12 months learning the ropes before I became productive. Now I teach myself a couple of new languages each year, and a new person entering our group is expected to start contributing after only a few weeks on the job. And all this with the same size brain case (about 1300 ccs) owned by the early Homo erectus who were banging out those repetitive Acheulean hand axes.

If technological development has followed a roughly exponential curve, however, where does this take us in the long run? Can the same 1300 cubic centimeters continue to process new information ever more quickly, or will we reach a point where we can no longer increase the speed of innovation? The educational system, developed in the post-Sputnik era as an attempt to optimize technological innovation, is finely tuned to separate out the most capable children and to press them into hypercompetitive universities where they exercise their capabilities to the limit. Even the prodigies thus identified and trained cannot forever increase the speed of innovation, given the fundamental limitations of biological systems.

Looking ahead I can imagine only two outcomes. One possibility is that we better integrate digital technologies to aid our ability to think more efficiently. We can integrate search engines at a deep level as we think and work, and otherwise depend on computers to better organize the information we seek to process. I can't guess at the form of the long-term integration between human neurons and silicon-based chips, but there is doubtless the potential for considerable improvement. The second possibility is that we finally concede to the inevitability of an exponentially growing curve overtaking a linear one. Perhaps, as Malthus originally predicted, population size will swamp our ability to produce food, and we will see mass starvation and disease. Or maybe we will instead fall prey to some other rapidly growing change, perhaps drowning under layers of plastic and pollution, or watching the planet melt beneath us under the influence of unsustainable growth. One way or another, however, we must address the fact that we are caught between two curves, and the space between them is rapidly diminishing.

Follow-up note on August 2: After writing this post last week I came upon an interesting article. Apparently there is quantitative evidence to suggest that the rate of evolutionary change in modern humans has in fact increased over the last 40,000 years (roughly the period of the upper Paleolithic, as identified above). In this case 'evolutionary change' implies positive selection, as measured through a variety of metrics from statistical genetics. So maybe my argument above about the increasing rate of technological change is not merely an abstract metaphor, but instead has some connection to our physical being as well.