How To Spot Bad Science

In a digital world that clamors for clicks, news is sensationalized and “facts” change all the time. Here’s how to discern what is trustworthy and what is hogwash.

***

Unless we’ve studied it, most of us are never taught how to evaluate science or how to parse the good from the bad. Yet science shapes every area of our lives and is vital for understanding how the world works. Appraising research for ourselves takes time and effort, however. Often, it can be enough to consult an expert or read a trustworthy source.

But some decisions require us to understand the underlying science. There is no way around it. Many of us hear about scientific developments from news articles and blog posts. Some sources put the work into presenting useful information. Others manipulate or misinterpret results to get more clicks. So we need the thinking tools necessary to know what to listen to and what to ignore. When it comes to important decisions, like knowing what individual action to take to minimize your contribution to climate change or whether to believe the friend who cautions against vaccinating your kids, being able to assess the evidence is vital.

Much of the growing (and concerning) mistrust of scientific authority is based on a misunderstanding of how it works and a lack of awareness of how to evaluate its quality. Science is not some big immovable mass. It is not infallible. It does not pretend to be able to explain everything or to know everything. Furthermore, there is no such thing as “alternative” science. Science does involve mistakes. But we have yet to find a system of inquiry capable of achieving what it does: move us closer and closer to truths that improve our lives and understanding of the universe.

“Rather than love, than money, than fame, give me truth.”

— Henry David Thoreau

There is a difference between bad science and pseudoscience. Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases. Often, it’s produced with the best of intentions, just by researchers who are responding to skewed incentives.

Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove. Pseudoscience focuses on finding evidence to confirm its claims, disregarding disconfirmation. Practitioners invent narratives to preemptively dismiss any actual science contradicting their views. It may adopt the appearance of actual science to look more persuasive.

While the tools and pointers in this post are geared towards identifying bad science, they will also make pseudoscience easier to spot.

Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis. It takes many repetitions of applying this method to build reasonable support for a hypothesis.

In order for a hypothesis to count as such, there must be evidence that, if collected, would disprove it.

In this post, we’ll talk you through two examples of bad science to point out some of the common red flags. Then we’ll look at some of the hallmarks of good science you can use to sort the signal from the noise. We’ll focus on the type of research you’re likely to encounter on a regular basis, including medicine and psychology, rather than areas less likely to be relevant to your everyday life.

[Note: we will use the terms “research” and “science” and “researcher” and “scientist” interchangeably here.]

Power Posing

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” ―Isaac Asimov

First, here’s an example of flawed science from psychology: power posing. A 2010 study by Dana Carney, Andy J. Yap, and Amy Cuddy entitled “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance” claimed “open, expansive” poses caused participants to experience elevated testosterone levels, reduced cortisol levels, and greater risk tolerance. These are all excellent things in a high-pressure situation, like a job interview. The abstract concluded that “a person can, via a simple two-minute pose, embody power and instantly become more powerful.” The idea took off. It spawned hundreds of articles, videos, and tweets espousing the benefits of including a two-minute power pose in your day.

Yet at least eleven follow-up studies, many led by Joseph Cesario of Michigan State University, including “‘Power Poses’ Don’t Work, Eleven New Studies Suggest,” failed to replicate the results. None found that power posing has a measurable impact on people’s performance in tasks or on their physiology. While subjects did report a subjective feeling of increased powerfulness, their performance did not differ from that of subjects who did not strike a power pose.

One of the researchers of the original study, Carney, has since changed her mind about the effect, stating she no longer believes the results of the original study. Unfortunately, this isn’t always how researchers respond when confronted with evidence discrediting their prior work. We all know how uncomfortable changing our minds is.

The notion of power posing is exactly the kind of nugget that spreads fast online. It’s simple, free, promises dramatic benefits with minimal effort, and is intuitive. We all know posture is important. It has a catchy, memorable name. Yet examining the details of the original study reveals a whole parade of red flags. The study had 42 participants. That might be reasonable for a preliminary or pilot study, but it is in no way sufficient to “prove” anything. It was not blinded. Feedback from participants was self-reported, which is notorious for being biased and inaccurate.

There is also a clear correlation/causation issue. Powerful, dominant animals tend to use expansive body language that exaggerates their size. Humans often do the same. But that doesn’t mean it’s the pose making them powerful. Being powerful could make them pose that way.

A TED Talk in which Amy Cuddy, the study’s co-author, claimed power posing could “significantly change the way your life unfolds” is one of the most popular to date, with tens of millions of views. The presentation of the science in the talk is also suspect. Cuddy makes strong claims with a single, small study as justification. She portrays power posing as a panacea. Likewise, the original study’s claim that a power pose makes someone “instantly become more powerful” is suspiciously strong.

This is one of many psychological studies about small behavioral tweaks that have not stood up to scrutiny. We’re not singling out the power pose study as being unusually flawed or in any way fraudulent. The researchers had clear good intentions and a sincere belief in their work. It’s a strong example of why we should go straight to the source if we want to understand research. Coverage elsewhere is unlikely to even mention methodological details or acknowledge any shortcomings. It would ruin the story. We even covered power posing on Farnam Street in 2016—we’re all susceptible to taking these ‘scientific’ results seriously, without checking the validity of the underlying science.

It is a good idea to be skeptical of research promising anything too dramatic or extreme with minimal effort, especially without substantial evidence. If it seems too good to be true, it most likely is.

Green Coffee Beans

“An expert is a person who has made all the mistakes that can be made in a very narrow field.” ―Niels Bohr

The world of weight-loss science is one where bad science is rampant. We all know, deep down, that we cannot circumvent the need for healthy eating and exercise. Yet the search for a magic bullet, offering results without effort or risks, continues. Let’s take a look at one study that is a masterclass in bad science.

Entitled “Randomized, Double-Blind, Placebo-Controlled, Linear Dose, Crossover Study to Evaluate the Efficacy and Safety of a Green Coffee Bean Extract in Overweight Subjects,” it was published in 2012 in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. On the face of it, and to the untrained eye, the study may appear legitimate, but it is rife with serious problems, as Scott Gavura explained in the article “Dr. Oz and Green Coffee Beans – More Weight Loss Pseudoscience” in the publication Science-Based Medicine. The original paper was later retracted by its authors. The Federal Trade Commission (FTC) ordered the supplement manufacturer who funded the study to pay a $3.5 million fine for using it in their marketing materials, describing it as “botched.”

The Food and Drug Administration (FDA) recommends that weight-loss studies include at least 3,000 participants receiving the active medication and at least 1,500 receiving a placebo, all for a minimum period of 12 months. This study used a mere 16 subjects, with no clear selection criteria or explanation. None of the researchers involved had medical experience or had published related research. They did not disclose the conflict of interest inherent in the funding source. The paper did not describe efforts to avoid confounding factors and was vague and inconsistent about whether subjects changed their diet and exercise. The study was not double-blinded, despite claiming to be. It has not been replicated.

The FTC reported that the study’s lead investigator “repeatedly altered the weights and other key measurements of the subjects, changed the length of the trial, and misstated which subjects were taking the placebo or GCA during the trial.” A meta-analysis by Rachel Buchanan and Robert D. Beckett, “Green Coffee for Pharmacological Weight Loss” published in the Journal of Evidence-Based Complementary & Alternative Medicine, failed to find evidence for green coffee beans being safe or effective; all the available studies had serious methodological flaws, and most did not comply with FDA guidelines.

Signs of Good Science

“That which can be asserted without evidence can be dismissed without evidence.” ―Christopher Hitchens

We’ve inverted the problem and considered some of the signs of bad science. Now let’s look at some of the indicators a study is likely to be trustworthy. Unfortunately, there is no single sign a piece of research is good science. None of the signs mentioned here are, alone, in any way conclusive. There are caveats and exceptions to all. These are simply factors to evaluate.

It’s Published by a Reputable Journal

“The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations.” —Karl Popper

A journal, any journal, publishing a study says little about its quality. Some will publish any research they receive in return for a fee. A few so-called “vanity publishers” claim to have a peer-review process, yet they typically have a short gap between receiving a paper and publishing it. We’re talking days or weeks, not the expected months or years. Many predatory publishers do not even make any attempt to verify quality.

No journal is perfect. Even the most respected journals make mistakes and publish low-quality work sometimes. However, anything that is not published research or based on published research in a journal is not worth consideration. Not as science. A blog post saying green smoothies cured someone’s eczema is not comparable to a published study. The barrier is too low. If someone cared enough about using a hypothesis or “finding” to improve the world and educate others, they would make the effort to get it published. The system may be imperfect, but reputable researchers will generally make the effort to play within it to get their work noticed and respected.

It’s Peer Reviewed

Peer review is a standard process in academic publishing. It’s intended as an objective means of assessing the quality and accuracy of new research. Uninvolved researchers with relevant experience evaluate papers before publication. They consider factors like how well it builds upon pre-existing research or if the results are statistically significant. Peer review should be double-blinded. This means the researcher doesn’t know who is reviewing their work and the reviewer doesn’t know who the researcher is.

Publishers only perform a cursory “desk check” before moving on to peer review. This is to check for major errors, nothing more. They cannot have the expertise necessary to vet the quality of every paper they handle—hence the need for external experts. The number of reviewers and strictness of the process depends on the journal. Reviewers either declare a paper unpublishable or suggest improvements. It is rare for them to suggest publishing without modifications.

Sometimes several rounds of modifications prove necessary. It can take years for a paper to see the light of day, which is no doubt frustrating for the researcher. But the process helps weed out mistakes and weak areas.

Pseudoscientific practitioners will often claim they cannot get their work published because peer reviewers suppress anything contradicting prevailing doctrines. Good researchers know having their work challenged and argued against is positive. It makes them stronger. They don’t shy away from it.

Peer review is not a perfect system. Seeing as it involves humans, there is always room for bias and manipulation. In a small field, it may be easy for a reviewer to get past the double-blinding. However, as it stands, peer review seems to be the best available system. In isolation, it’s not a guarantee that research is perfect, but it’s one factor to consider.

The Researchers Have Relevant Experience and Qualifications

One of the red flags in the green coffee bean study was that the researchers involved had no medical background or experience publishing obesity-related research.

While outsiders can sometimes make important advances, researchers should have relevant qualifications and a history of working in that field. It is too difficult to make scientific advancements without the necessary background knowledge and expertise. If someone cares enough about advancing a given field, they will study it. If it’s important, verify their backgrounds.

It’s Part of a Larger Body of Work

“Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.” ―Jules Verne

We all like to stand behind the maverick. But we should be cautious of doing so when it comes to evaluating the quality of science. On the whole, science does not progress in great leaps. It moves along millimeter by millimeter, gaining evidence in increments. Even if a piece of research is presented as groundbreaking, it has years of work behind it.

Researchers do not work in isolation. Good science is rarely, if ever, the result of one person or even one organization. It comes from a monumental collective effort. So when evaluating research, it is important to see if other studies point to similar results and if it is an established field of work. For this reason, meta-analyses, which analyze the combined results of many studies on the same topic, are often far more useful to the public than individual studies. Scientists are humans and they all make mistakes. Looking at a collective body of work helps smooth out any problems. Individual studies are valuable in that they further the field as a whole, allowing for the creation of meta-studies.

Science is about evidence, not reputation. Sometimes well-respected researchers, for whatever reason, produce bad science. Sometimes outsiders produce amazing science. What matters is the evidence they have to support it. While an established researcher may have an easier time getting support for their work, the overall community accepts work on merit. When we look to examples of unknowns who made extraordinary discoveries out of the blue, they always had extraordinary evidence for it.

Questioning the existing body of research is not inherently bad science or pseudoscience. Doing so without a remarkable amount of evidence is.

It Doesn’t Promise a Panacea or Miraculous Cure

Studies that promise anything a bit too amazing can be suspect. This is more common in media reporting of science or in research used for advertising.

In medicine, a panacea is something that can supposedly solve all, or many, health problems. These claims are rarely substantiated by anything even resembling evidence. The more outlandish the claim, the less likely it is to be true. Occam’s razor teaches us that the simplest explanation with the fewest inherent assumptions is most likely to be true. This is a useful heuristic for evaluating potential magic bullets.

It Avoids or at Least Discloses Potential Conflicts of Interest

A conflict of interest is anything that incentivizes producing a particular result. It distorts the pursuit of truth. A government study into the health risks of recreational drug use will be biased towards finding evidence of negative risks. A study of the benefits of breakfast cereal funded by a cereal company will be biased towards finding plenty of benefits. Researchers do have to get funding from somewhere, so this does not automatically make a study bad science. But research without conflicts of interest is more likely to be good science.

High-quality journals require researchers to disclose any potential conflicts of interest. But not all journals do. Media coverage of research may not mention this (another reason to go straight to the source). And people do sometimes lie. We don’t always know how unconscious biases influence us.

It Doesn’t Claim to Prove Anything Based on a Single Study

In the vast majority of cases, a single study is a starting point, not proof of anything. The results could be random chance, or the result of bias, or even outright fraud. Only once other researchers replicate the results can we consider a study persuasive. The more replications, the more reliable the results are. If attempts at replication fail, this can be a sign the original research was biased or incorrect.
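
To make the role of chance concrete, here is a minimal simulation sketch (assuming Python with NumPy and SciPy; the group sizes and threshold are illustrative assumptions, not drawn from any real study). Even when no true effect exists at all, roughly 5% of studies will cross the conventional p < 0.05 bar by luck alone:

```python
# Illustrative sketch: simulate many studies where the true effect is zero
# and count how many reach "statistical significance" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 1000
n_per_group = 21  # 42 participants total, as in the power posing study

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: no real effect.
    group_a = rng.normal(size=n_per_group)
    group_b = rng.normal(size=n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null studies were 'significant'")
# Expect roughly 50 (about 5%). Any single "significant" study could be one
# of these flukes; replication is what separates signal from chance.
```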

A note on anecdotes: they’re not science. Anecdotes, especially from people close to us or those who have a lot of letters behind their name, carry disproportionate clout. But hearing something from one person, no matter how persuasive, should not be enough to discredit published research.

Science is about evidence, not proof. And evidence can always be discredited.

It Uses a Reasonable, Representative Sample Size

A representative sample represents the wider population, not one segment of it. If it does not, then the results may only be relevant for people in that demographic, not everyone. Bad science will often also use very small sample sizes.

There is no set target for what makes a large enough sample size; it all depends on the nature of the research. In general, the larger, the better. The exception is in studies that may put subjects at risk, which use the smallest possible sample to achieve usable results.
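
One way researchers choose a sample size is a power calculation: given the smallest effect worth detecting and the field’s significance threshold, it estimates how many subjects are needed to detect that effect reliably. Here is a rough sketch (assuming Python with the statsmodels package; the effect sizes are illustrative):

```python
# Illustrative power calculation: subjects per group needed to detect an
# effect at p < 0.05 with 80% power, for two assumed effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"medium effect (d=0.5): ~{n_medium:.0f} subjects per group")  # ~64

n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"small effect (d=0.2):  ~{n_small:.0f} subjects per group")   # ~394
```

Note how quickly the required sample grows as the expected effect shrinks—one reason a 16-subject weight-loss trial cannot support strong claims.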

In areas like nutrition and medicine, it’s also important for a study to last a long time. A study looking at the impact of a supplement on blood pressure over a week is far less useful than one over a decade. Long-term data smooths out fluctuations and offers a more comprehensive picture.

The Results Are Statistically Significant

Statistical significance is usually assessed with a p-value: the probability of obtaining results at least as extreme as those observed if there were, in fact, no real effect. The threshold for statistical significance varies between fields, though p < 0.05 is a common convention. Check whether the reported results clear the field’s accepted threshold and whether confidence intervals are reported. If not, the results are not worth paying attention to.
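
As a hedged illustration of how a p-value is computed in practice (assuming Python with NumPy and SciPy; the data are simulated, not from any real trial):

```python
# Sketch of a two-sample t-test on simulated trial data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcomes: the treatment group improves by 5 units on average.
treatment = rng.normal(loc=5.0, scale=10.0, size=100)
placebo = rng.normal(loc=0.0, scale=10.0, size=100)

t_stat, p_value = stats.ttest_ind(treatment, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below the field's threshold (often 0.05) is labeled
# "statistically significant"—evidence against pure chance, not proof.
```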

It Is Well Presented and Formatted

“When my information changes, I alter my conclusions. What do you do, sir?” ―John Maynard Keynes

As basic as it sounds, we can expect good science to be well presented and carefully formatted, without prominent typos or sloppy graphics.

It’s not that bad presentation makes something bad science. It’s more the case that researchers producing good science have an incentive to make it look good. As Michael J. I. Brown of Monash University explains in How to Quickly Spot Dodgy Science, this is far more than a matter of aesthetics. The way a paper looks can be a useful heuristic for assessing its quality. Researchers who are dedicated to producing good science can spend years on a study, fretting over its results and investing in gaining support from the scientific community. This means they are less likely to present work looking bad. Brown gives an example of looking at an astrophysics paper and seeing blurry graphs and misplaced image captions—then finding more serious methodological issues upon closer examination. In addition to other factors, sloppy formatting can sometimes be a red flag. At the minimum, a thorough peer-review process should eliminate glaring errors.

It Uses Control Groups and Double-Blinding

A control group serves as a point of comparison in a study. The control group should be people as similar as possible to the experimental group, except they’re not subject to whatever is being tested. The control group may also receive a placebo to see how the outcome compares.

Blinding refers to the practice of obscuring which group participants are in. For a single-blind experiment, the participants do not know if they are in the control or the experimental group. In a double-blind experiment, neither the participants nor the researchers know. This is the gold standard and is essential for trustworthy results in many types of research. If people know which group they are in, the results are not trustworthy. If researchers know, they may (unintentionally or not) nudge participants towards the outcomes they want or expect. So a double-blind study with a control group is far more likely to be good science than one without.
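
To make this concrete, here is a minimal sketch of randomized, double-blind assignment (in Python, with hypothetical participant IDs and labels; real trials use far more rigorous protocols):

```python
# Minimal sketch of double-blind random assignment. In practice a third
# party holds the unblinding key until data collection is complete.
import random

random.seed(7)

participants = [f"P{i:03d}" for i in range(1, 101)]  # hypothetical IDs
random.shuffle(participants)

half = len(participants) // 2
key = {pid: "treatment" for pid in participants[:half]}
key.update({pid: "placebo" for pid in participants[half:]})

# Treatment and placebo are prepared identically and labeled only "A" or
# "B", so neither participants nor researchers can tell the groups apart.
blinded_label = {pid: "A" if arm == "treatment" else "B"
                 for pid, arm in key.items()}

print(blinded_label["P001"])  # all anyone sees until the trial is unblinded
```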

It Doesn’t Confuse Correlation and Causation

In the simplest terms, two things are correlated if they happen at the same time. Causation is when one thing causes another thing to happen. For example, one large-scale study entitled “Are Non-Smokers Smarter than Smokers?” found that people who smoke tobacco tend to have lower IQs than those who don’t. Does this mean smoking lowers your IQ? It might, but there is also a strong link between socio-economic status and smoking. People with low incomes are, on average, likely to have lower IQs than those with higher incomes due to factors like worse nutrition, less access to education, and sleep deprivation. According to a study by the Centers for Disease Control and Prevention entitled “Cigarette Smoking and Tobacco Use Among People of Low Socioeconomic Status,” people of low socio-economic status are also more likely to smoke and to do so from a young age. There might be a correlation between smoking and IQ, but that doesn’t mean causation.

Disentangling correlation and causation can be difficult, but good science will take this into account and may detail potential confounding factors and the efforts made to avoid them.
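
A small simulation makes the smoking example concrete (a sketch with invented numbers, not real data): a hidden confounder influences both variables, producing a correlation with no direct causal link between them:

```python
# Sketch: a hidden confounder (socio-economic status) influences both
# smoking and IQ, creating a correlation with no direct causal link.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ses = rng.normal(size=n)                            # hidden confounder
smokes = ses + rng.normal(size=n) < 0               # lower SES -> more smoking
iq = 100 + 5 * ses + rng.normal(scale=10, size=n)   # lower SES -> lower IQ

# Smokers show lower average IQ even though smoking never affects IQ here.
print(f"smokers:     {iq[smokes].mean():.1f}")
print(f"non-smokers: {iq[~smokes].mean():.1f}")
```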

Conclusion

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.” ―Claude Lévi-Strauss

The points raised in this article are all aimed at the linchpin of the scientific method—we cannot necessarily prove anything; we must consider the most likely outcome given the information we have. Bad science is generated by those who are willfully ignorant or are so focused on trying to “prove” their hypotheses that they fudge results and cherry-pick to shape their data to their biases. The problem with this approach is that it transforms what could be empirical and scientific into something subjective and ideological.

When we look to disprove what we know, we are able to approach the world with a more flexible way of thinking. If we are unable to defend what we know with reproducible evidence, we may need to reconsider our ideas and adjust our worldviews accordingly. Only then can we properly learn and begin to make forward steps. Through this lens, bad science and pseudoscience are simply the intellectual equivalent of treading water, or even sinking.

Article Summary

  • Most of us are never taught how to evaluate science or how to parse the good from the bad. Yet it is something that dictates every area of our lives.
  • Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases.
  • Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove.
  • Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis.
  • Science is about evidence, not proof. And evidence can always be discredited.
  • In science, if it seems too good to be true, it most likely is.

Signs of good science include:

  • It’s Published by a Reputable Journal
  • It’s Peer Reviewed
  • The Researchers Have Relevant Experience and Qualifications
  • It’s Part of a Larger Body of Work
  • It Doesn’t Promise a Panacea or Miraculous Cure
  • It Avoids or at Least Discloses Potential Conflicts of Interest
  • It Doesn’t Claim to Prove Anything Based on a Single Study
  • It Uses a Reasonable, Representative Sample Size
  • The Results Are Statistically Significant
  • It Is Well Presented and Formatted
  • It Uses Control Groups and Double-Blinding
  • It Doesn’t Confuse Correlation and Causation

Advice for Young Scientists—and Curious People in General

The Nobel Prize-winning biologist Peter Medawar (1915–1987) is best known for work that made the first organ transplants and skin grafts possible. Medawar was also a lively, witty writer who penned numerous books on science and philosophy.

In 1979, he published Advice to a Young Scientist, a book brimming with both practical advice and philosophical guidance for anyone “engaged in exploratory activities.” Here, we summarize some of Medawar’s key insights from the book.

***

Application, diligence, a sense of purpose

“There is no certain way of telling in advance if the daydreams of a life dedicated to the pursuit of truth will carry a novice through the frustration of seeing experiments fail and of making the dismaying discovery that some of one’s favourite ideas are groundless.”

If you want to make progress in any area, you need to be willing to give up your best ideas from time to time. Science proceeds because researchers do all they can to disprove their hypotheses rather than prove them right. Medawar notes that he twice spent two whole years trying to corroborate groundless hypotheses. The key to being a good scientist is the capacity to take no for an answer—when necessary. Additionally:

“…one does not need to be terrifically brainy to be a good scientist…there is nothing in experimental science that calls for great feats of ratiocination or a preternatural gift for deductive reasoning. Common sense one cannot do without, and one would be the better for owning some of those old-fashioned virtues which have fallen into disrepute. I mean application, diligence, a sense of purpose, the power to concentrate, to persevere and not be cast down by adversity—by finding out after long and weary inquiry, for example, that a dearly loved hypothesis is in large measure mistaken.”

The truth is, any measure of risk-taking comes with the possibility of failure. Learning from failure to continue exploring the unknown is a broadly useful mindset.

***

How to make important discoveries

“It can be said with marked confidence that any scientist of any age who wants to make important discoveries must study important problems. Dull or piffling problems yield dull or piffling answers.”

A common piece of advice for people early on in their careers is to pursue what they find most interesting. Medawar disagrees, explaining that “almost any problem is interesting if it is studied in sufficient depth.” He advises scientists to look for important problems, meaning ones with answers that matter to humankind.

When choosing an area of research, Medawar cautions against mistaking a fashion (“some new histochemical procedure or technical gimmick”) for a movement (“such as molecular genetics or cellular immunology”). Movements lead somewhere; fashions generally don’t.

***

Getting started

Whenever we begin some new endeavor, it can be tempting to think we need to know everything there is to know about it before we even begin. Often, this becomes a form of procrastination. Only once we try something and our plans make contact with reality can we know what we need to know. Medawar believes it’s unnecessary for scientists to spend an enormous amount of time learning techniques and supporting disciplines before beginning research:

“As there is no knowing in advance where a research enterprise may lead and what kind of skills it will require as it unfolds, this process of ‘equipping oneself’ has no predeterminable limits and is bad psychological policy….The great incentive to learning a new skill or supporting discipline is needing to use it.”

The best way to learn what we need to know is by getting started, then picking up new knowledge as it proves itself necessary. When there’s an urgent need, we learn faster and avoid unnecessary learning. The same can be true for too much reading:

“Too much book learning may crab and confine the imagination, and endless poring over the research of others is sometimes psychologically a research substitute, much as reading romantic fiction may be a substitute for real-life romance….The beginner must read, but intently and choosily and not too much.”

We don’t talk about this much at Farnam Street, but it is entirely possible to read too much. Reading becomes counterproductive when it serves as a substitute for doing the real thing, if that’s what someone is reading for. Medawar explains that it is “psychologically most important to get results, even if they are not original.” It’s important to build confidence by doing something concrete and seeing a visible manifestation of our labors. For Medawar, the best scientists begin with the understanding that they can never know everything and, besides, learning needs to be a lifelong process.

***

The secrets to effective collaboration

“Scientific collaboration is not at all like cooks elbowing each other from the pot of broth; nor is it like artists working on the same canvas, or engineers working out how to start a tunnel simultaneously from both sides of a mountain in such a way that the contractors do not miss each other in the middle and emerge independently at opposite ends.”

Instead, scientific collaboration is about researchers creating the right environment to develop and expand upon each other’s ideas. A good collaboration is greater than the sum of its parts and results in work that isn’t attributable to a single person.

For scientists who find their collaborators infuriating from time to time, Medawar advises being self-aware. We all have faults, and we too are probably almost intolerable to work with sometimes.

When collaboration becomes contentious, Medawar maintains that we should give away our best ideas.

Scientists sometimes face conflict over the matter of credit. If several researchers are working on the same problem, whichever one finds the solution (or a solution) first gets the credit, no matter how close the others were. This is a problem most creative fields don’t face: “The twenty years Wagner spent on composing the first three operas of The Ring were not clouded by the fear that someone else might nip ahead of him with Götterdämmerung.” Once a scientific idea becomes established, it becomes public property. So the only chance of ownership a researcher has comes by being the first.

However, Medawar advocates for being open about ideas and doing away with secrecy because “anyone who shuts his door keeps out more than he lets out.” He goes on to write, “The agreed house rule of the little group of close colleagues I have always worked with has always been ‘Tell everyone everything you know,’ and I don’t know anyone who came to any harm by falling in with it.”

***

How to handle moral dilemmas

“A scientist will normally have contractual obligations to his employer and has always a special and unconditionally binding obligation to the truth.”

Medawar writes that many scientists, at some point in their career, find themselves grappling with the conflict between a contractual obligation and their own conscience. However, the “time to grapple is before a moral dilemma arises.” If we think an enterprise might lead somewhere damaging, we shouldn’t start on it in the first place.

We should know our values and aim to do work in accordance with them.

***

The first rule is never to fool yourself

“I cannot give any scientist of any age better advice than this: the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” All scientists make mistakes sometimes. Medawar advises, when this happens, to issue a swift correction. To do so is far more respectable and beneficial for the field than trying to cover it up. Echoing the previous advice to always be willing to take no for an answer, Medawar warns about falling in love with a hypothesis and believing it is true without evidence.

“A scientist who habitually deceives himself is well on the way toward deceiving others.”

***

The best creative environment

“To be creative, scientists need libraries and laboratories and the company of other scientists; certainly a quiet and untroubled life is a help. A scientist’s work is in no way deepened or made more cogent by privation, anxiety, distress, or emotional harassment. To be sure, the private lives of scientists may be strangely and comically mixed up, but not in ways that have any special bearing on the nature and quality of their work.”

Creativity rises from tranquility, not from disarray. Creativity is supported by a safe environment, one in which you can share and question openly and be heard with compassion and a desire to understand.

***

A final piece of advice:

“A scientist who wishes to keep his friends and not add to the number of his enemies must not be forever scoffing and criticizing and so earn a reputation for habitual disbelief; but he owes it to his profession not to acquiesce in or appear to condone folly, superstition, or demonstrably unsound belief. The recognition and castigation of folly will not win him friends, but it may gain him some respect.”

We Are What We Remember

Memory is an intrinsic part of our life experience. It is critical for learning, and without memories we would have no sense of self. Understanding why some memories stick better than others, as well as accepting their fluidity, helps us reduce conflict and better appreciate just how much our memories impact our lives.

***

“Which of our memories are true and which are not is something we may never know. It doesn’t change who we are.”

Memories can be so vivid. Let’s say you are spending time with your sibling and reflecting on your past when suddenly a memory pops up. Even though it’s about events that occurred twenty years ago, it seems like it happened yesterday. The sounds and smells pop into your mind. You remember what you were wearing, the color of the flowers on the table. You chuckle and share your memory with your sibling. But they stare at you and say, “That’s not how I remember it at all.” What?

Memory discrepancies happen all the time, but we have a hard time accepting that our memories are rarely accurate. Because we’ve been conditioned to think of our memories like video recordings or data stored in the cloud, we assert that our rememberings are the correct ones. Anyone who remembers the situation differently must be wrong.

Memories are never an exact representation of a moment in the past. They are not copied with perfect fidelity, and they change over time. Some of our memories may not even be ours, but rather something we saw in a film or a story someone else told to us. We mix and combine memories, especially older ones, all the time. It can be hard to accept the malleable nature of memories and the fact that they are not just sitting in our brains waiting to be retrieved. In Adventures in Memory, writer Hilde Østby and neuropsychologist Ylva Østby present a fascinating journey through all aspects of memory. Their stories and investigations provide great insight into how memory works, how our capacity for memory is an integral part of the human condition, and how a better understanding of memory helps us avoid the conflicts we create when we insist that what we remember is right.

***

Memory and learning

“One thing that aging doesn’t diminish is the wisdom we have accumulated over a lifetime.”

Our memories, dynamic and changing though they may be, are with us for the duration of our lives. Unless you’ve experienced brain trauma, you learn new things and store at least some of what you learn in memory.

Memory is an obvious component of learning, but we don’t often think of it that way. When we learn something new, it’s against the backdrop of what we already know. All knowledge that we pick up over the years is stored in memory. The authors suggest that “how much you know in a broad sense determines what you understand of the new things you learn.” It’s easier to remember something if it can hook into context we already have: the more we know, the more a new memory can attach to. Thus, what we already know, what we remember, impacts what we learn.

The Østbys explain that the strongest memory networks are created “when we learn something truly meaningful and make an effort to understand it.” They describe someone who is passionate about diving and thus “will more easily learn new things about diving than about something she’s never been interested in before.” Because the diver already knows a lot about diving, and because she loves it and is motivated to learn more, new knowledge about diving will easily attach itself to the memory network she already has about the subject.

While studying people who seem to have amazing memories, as measured by the sheer amount they can recall with accuracy, one of the conclusions the Østbys reach is “that many people who rely on their memories don’t use mnemonic techniques, nor do they cram. They’re just passionate about what they do.” The more meaningful the topics and the more we are invested in truly learning, the higher the chances are that we will convert new information into lasting memory. Also, the more we learn, the more we will remember. There doesn’t seem to be a limit on how much we can put into memory.

***

How we build our narratives

The experience of being a human is inseparable from our ability to remember. You can’t build relationships without memories. You can’t prepare for the future if you don’t remember the past.

The memories we hold on to early on have a huge impact on the ones we retain as we progress through life. “When memories enter our brain,” the Østbys explain, “they attach themselves to similar memories: ones from the same environment, or that involve the same feeling, the same music, or the same significant moment in history. Memories seldom swim around without connections.” Thus, a memory is significantly more likely to stick around if it can attach itself to something. A new experience that has very little in common with the narrative we’ve constructed of ourselves is harder to retain in memory.

As we get older, our new memories tend to reinforce what we already think of ourselves. “Memory is self-serving,” the Østbys write. “Memories are linked to what concerns you, what you feel, what you want.”

Why is it so much easier to remember the details of a vacation or a fight we’ve had with our partner than the details of a physics lesson or the plot of a classic novel? “The fate of a memory is mostly determined by how much it means to us. Personal memories are important to us. They are tied to our hopes, our values, and our identities. Memories that contribute meaningfully to our personal autobiography prevail in our minds.” We need not beat ourselves up because we have a hard time remembering names or birthdays. Rather, we can accept that the triggers for the creation of a memory and its retention are related to how it speaks to the narrative we maintain about ourselves. This view of memory suggests that to better retain information, we can try to make knowing that information part of our identity. We don’t try to remember physics equations for the sake of it, but rather because in our personal narrative, we are someone who knows a lot about physics.

***

Memory, imagination, and fluidity

Our ability to imagine is based, in part, on our ability to remember. The connection works on two levels.

The first, the Østbys write, is that “our memories are the fuel for our imagination.” What we remember about the past informs a lot of what we can imagine about the future. Whether it’s snippets from movies we’ve seen or activities we’ve done, it’s our ability to remember the experiences we’ve had that provide the foundation for our imagination.

Second, there is a physical connection between memory and imagination. “The process that gives us vivid memories is the same as the one that we use to imagine the future.” We use the same parts of the brain when we immerse ourselves in an event from our past as we do when we create a vision for our future. Thus, one of the conclusions of Adventures in Memory is that “as far as our brains are concerned, the past and future are almost the same.” In terms of how they can feel to us, memories and the products of imagination are not that different.

The interplay between past and future, between memory and imagination, impacts the formation of memories themselves. Memory “is a living organism,” the Østbys explain, “always absorbing images, and when new elements are added, they are sewn into the original memory as seamlessly as only our imagination can do.”

One of the most important lessons from the book is to change up the analogies we use to understand memory. Memories are not like movies, exactly the same no matter how many times you watch them. Nor are they like files stored in a computer, unchanging data saved for when we might want to retrieve it. Memories, like the rest of our biology, are fluid.

“Memory is more like live theater, where there are constantly new productions of the same pieces,” the Østbys write. “Each and every one of our memories is a mix of fact and fiction. In most memories the central story is based on true events, but it’s still reconstructed every time we recall it. In these reconstructions, we fill in the gaps with probable facts. We subconsciously pick up details from a sort-of memory prop room.”

Understanding memory as more like a theater production, where the version you see in London’s West End isn’t going to be exactly the same as the one you see on Broadway, helps us let go of attaching a judgment of accuracy to what we remember. It’s okay to find out when reminiscing with friends that you have different memories of the same day. It’s also acceptable that two people will have different memories of the events leading to their divorce, or that business partners will have different memories of the terms they agreed to at the start of the partnership. The more you get used to the fluidity of your memories, the more the differences in recollections become sources of understanding instead of points of contention. What people communicate about what they remember can give you insight into their attitudes, beliefs, and values.

***

Conclusion

New memories build on the ones that are already there. The more we know, the easier it is to remember the new things we learn. But we have to be careful and recognize that our tendency is to reinforce the narrative we’ve already built. Brand new information is harder to retain, but sometimes we need to make the effort.

Finally, memories are important not only for learning and remembering but also because they form the basis of what we can imagine and create. In so many ways, we are what we remember. Accepting that our vivid memories can be very different from those who were in the same situation helps us reduce the conflict that comes with insisting that our memories must always be correct.

When Technology Takes Revenge

While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.

***

By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.

Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating additional worse problems, new types of problems, or shifting the harm elsewhere. In short, they bite back.

Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.

Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.

Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening with complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not a warning about specific risks.

***

Types of revenge effects

There are four different types of revenge effects, described here as follows:

  1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
  2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
  3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
  4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.

***

Recognizing unintended consequences

The more we try to control our tools, the more they can retaliate.

Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.

Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.

Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”

Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”

Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.

Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.

Cumulative problems, compared to localized ones, are harder to measure and easier to dismiss. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”

Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems,” Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them is far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disrupting the material is often more harmful than leaving it in place.

***

Not all effects exact revenge

A revenge effect is not a side effect—defined as a cost that goes along with a benefit. The value of being able to sanitize a public water supply has significant positive health outcomes. It also has a side effect of necessitating an organizational structure that can manage and monitor that supply.

Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.

Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:

If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.

***

In support of caution

In the conclusion of Why Things Bite Back, Tenner writes:

We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.

While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”

Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.

While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).

Before we intervene in a system, we should not assume we can only improve it; our actions may make things worse, or do nothing at all. Our estimates of the benefits are likely to be more realistic if we start from skepticism.

If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.

 

The Observer Effect: Seeing Is Changing https://myvibez.link/observer-effect/ Mon, 17 Aug 2020 11:00:58 +0000

The act of looking at something changes it – an effect that holds true for people, animals, even atoms. Here’s how the observer effect distorts our world and how we can get a more accurate picture.

***

We often forget to factor in the distortion of observation when we evaluate someone’s behavior. We see what they are doing as representative of their whole life. But the truth is, we all change how we act when we expect to be seen. Are you ever on your best behavior when you’re alone in your house? To get better at understanding other people, we need to consider the observer effect: observing things changes them, and some phenomena only exist when observed.

The observer effect is not universal. The moon continues to orbit whether we have a telescope pointed at it or not. But both things and people can change under observation. So, before you judge someone’s behavior, it’s worth asking if they are changing because you are looking at them, or if their behavior is natural. People are invariably affected by observation. Being watched makes us act differently.

“I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers.”

— Isaac Asimov

The observer effect in science

The observer effect pops up in many scientific fields.

In physics, Erwin Schrödinger’s famous cat highlights the power of observation. In his best-known thought experiment, Schrödinger asked us to imagine a cat placed in a box with a radioactive atom that might or might not kill it in an hour. Until the box opens, the cat exists in a state of superposition (two distinct states holding simultaneously until a measurement settles the matter)—that is, the cat is both alive and dead. Only by observing it does the cat shift permanently to one of the two states. The observation removes the cat from superposition and commits it to just one.

(Although Schrödinger meant this as a critique of the prevailing interpretation of quantum superposition – he wanted to demonstrate the absurdity of applying it to everyday objects – it has caught on in popular culture as an illustration of the observer effect.)

In biology, when researchers want to observe animals in their natural habitat, it is paramount that they find a way to do so without disturbing those animals. Otherwise, the behavior they see is unlikely to be natural, because most animals (including humans) change their behavior when they are being observed. For instance, Dr. Cristian Damsa and his colleagues concluded in their paper “Heisenberg in the ER” that being observed makes psychiatric patients a third less likely to require sedation. Doctors and nurses wash their hands more when they know their hygiene is being tracked. And other studies have shown that zoo animals only exhibit certain behaviors in the presence of visitors, such as being hypervigilant of their presence and repeatedly looking at them.

In general, we change our behavior when we expect to be seen. The philosopher Jeremy Bentham knew this when he designed the panopticon prison in the eighteenth century, building upon an idea by his brother Samuel. The design arranged cells in a circle around a central watchtower, so inmates could never tell whether they were being watched. Bentham expected this would lead to better behavior, without the need for many staff. It never caught on as an actual prison design, but the modern prevalence of CCTV is often compared to the panopticon. We never know when we’re being watched, so we act as if it’s all the time.

The observer effect, however, is twofold. Observing changes what occurs, but observing also changes our perceptions of what occurs. Let’s take a look at that next.

“How much does one imagine, how much observe? One can no more separate those functions than divide light from air, or wetness from water.”

— Elspeth Huxley

Observer bias

The effects of observation get more complex when we consider how each of us filters what we see through our own biases, assumptions, preconceptions, and other distortions. There’s a reason, after all, why double-blinding (ensuring that neither tester nor subject receives information that might influence their behavior) is the gold standard in research involving living things. Observer bias occurs when we alter what we see, either by noticing only what we expect or by behaving in ways that influence what occurs. Without intending to do so, researchers may encourage certain results, leading to changes in ultimate outcomes.

A researcher falling prey to the observer bias is more likely to make erroneous interpretations, leading to inaccurate results. For instance, in a trial for an anti-anxiety drug where researchers know which subjects receive a placebo and which receive actual drugs, they may report that the latter group seems calmer because that’s what they expect.

The truth is, we often see what we expect to see. Our biases lead us to factor in irrelevant information when evaluating the actions of others. We also bring our past into the present and let that color our perceptions as well—so, for example, if someone has really hurt you before, you are less likely to see anything good in what they do.

The actor-observer bias

Another factor in the observer effect, and one we all fall victim to, is our tendency to attribute the behavior of others to innate personality traits. Yet we tend to attribute our own behavior to external circumstances. This is known as the actor-observer bias.

For example, a student who gets a poor grade on a test claims they were tired that day or the wording on the test was unclear. Conversely, when that same student observes a peer who performed badly on a test on which they performed well, the student judges their peer as incompetent or ill-prepared. If someone is late to a meeting with a friend, they rush in apologizing for the bad traffic. But if the friend is late, they label them as inconsiderate. When we see a friend having an awesome time in a social media post, we assume their life is fun all of the time. When we post about ourselves having an awesome time, we see it as an anomaly in an otherwise non-awesome life.

We have different levels of knowledge about ourselves and others. Because observation focuses on what is displayed, not what preceded or motivated it, we see the full context for our own behavior but only the final outcome for other people. We need to take the time to learn the context of others’ lives before we pass judgment on their actions.

Conclusion

We can use the observer effect to our benefit. If we want to change a behavior, finding some way to ensure someone else observes it can be effective. For instance, going to the gym with a friend means they know if we don’t go, making it more likely that we stick with it. Tweeting about our progress on a project can help keep us accountable. Even installing software on our laptop that tracks how often we check social media can reduce our usage.

But if we want to get an accurate view of reality, it is important we consider how observing it may distort the results. The value of knowing about the observer effect in everyday life is that it can help us factor in the difference that observation makes. If we want to gain an accurate picture of the world, it pays to consider how we take that picture. For instance, you cannot assume that an employee’s behavior in a meeting translates to their work, or that the way your kids act at home is the same as in the playground. We all act differently when we know we are being watched.

The Stormtrooper Problem https://myvibez.link/stormtrooper-problem/ Mon, 04 Mar 2019 12:00:20 +0000

There is no better problem solver than evolution.

Natural selection thrives on diversity. From DNA to ecosystems, advantageous traits spread when there’s a variety to choose from. While biology gets diversity through mutations, we can choose ours by the people we include.

Diversity isn’t about appearances but rather different perspectives shaped by experience.

Diversity of thought shouldn’t scare us; it should excite us. It removes blind spots and gives us more tools to solve problems. Imagine a meteor hurtling towards Earth. We’d want thousands of diverse minds working on it, not three.

Diversity of Thought

Intelligence agencies are a great petri dish for diversity because they require creative solutions to unique problems. You’d expect them to crave diversity and seek it out. You’d be wrong.

Their strict clearance process filters out the type of diversity that often results in excellent problem solvers, which inadvertently creates a monolithic workforce that thinks alike.

Consider some common problems. In debt? That might make you susceptible to blackmail. Divorced? That might make you an emotional decision-maker. Youthful indiscretion? That makes it harder to trust you. Gay, but haven’t told anyone? Blackmail risk. Independently wealthy? That means you don’t need our paycheck, which might make you disloyal. Do you have nuanced opinions on politics? Yikes. The list goes on.

While the security clearance process tries to reduce risk, it inadvertently creates the biggest one: the people who get through lack real diversity. Misfits are filtered out. Stormtroopers are hired.

As a result, effectiveness plummets and budgets increase. More Stormtroopers get hired. This is the Stormtrooper problem.

Eventually, hard problems get outsourced to the misfit-friendly companies that can solve them.

Nature’s Lesson

Birds didn’t evolve feathers for flight, but for warmth or attraction. This ‘exaptation’ – a trait repurposed for a new use – is crucial for survival in changing environments.

The Irish Potato Famine illustrates the danger of monoculture. Reliance on a single crop type led to widespread famine when blight struck. There is no adaptation without variation. If we’re all alike, we risk extinction when faced with the unexpected.

Even reproduction is about diversity. Each new life is a unique combination of genes, a shuffling that produces the unexpected, much like scientific breakthroughs.

Diversity makes us stronger, not weaker. Without it, we can’t adapt and survive.

Article Summary

  • Visible diversity is not the same as cognitive diversity.
  • Cognitive diversity comes from thinking about problems differently, not from race, gender, or sexual orientation.
  • Cognitive diversity helps us avoid blind spots and adapt to changing environments.
  • You can’t have selection without variation.
  • The Stormtrooper problem is when everyone working on a problem thinks about it similarly.

Alexander von Humboldt and the Invention of Nature: Creating a Holistic View of the World Through A Web of Interdisciplinary Knowledge https://myvibez.link/alexander-von-humboldt-and-the-invention-of-nature/ Wed, 03 May 2017 11:00:03 +0000

In his piece in 2014’s Edge collection This Idea Must Die: Scientific Theories That Are Blocking Progress, dinosaur paleontologist Scott Sampson writes that science needs to “subjectify” nature. By “subjectify”, he essentially means to see ourselves connected with nature, and therefore care about it the same way we do the people with whom we are connected.

That’s not the current approach. He argues: “One of the most prevalent ideas in science is that nature consists of objects. Of course, the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts.”

But this approach is ultimately failing us.

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature.

This isn’t a new plea, though. Over 200 years ago, the famous naturalist Alexander von Humboldt (1769-1859) was facing the same challenges.

In her compelling book The Invention of Nature: Alexander von Humboldt’s New World, Andrea Wulf explores Humboldt as the first person to publish works promoting a holistic view of nature, arguing that nature could only be understood in relation to the subjectivity of experiencing it.

Fascinated by scientific instruments, measurements and observations, he was driven by a sense of wonder as well. Of course nature had to be measured and analyzed, but he also believed that a great part of our response to the natural world should be based on the senses and emotions.

Humboldt was a rock star scientist who ignored conventional boundaries in his exploration of nature. Humboldt’s desire to know and understand the world led him to investigate discoveries in all scientific disciplines, and to see the interwoven patterns embedded in this knowledge — mental models anyone?

If nature was a web of life, he couldn’t look at it just as a botanist, a geologist or a zoologist. He required information about everything from everywhere.

Humboldt grew up in a world where science was dry, nature mechanical, and man an aloof and separate chronicler of what was before him. Not only did Humboldt have a new vision of what our understanding of nature could be, but he put humans in the middle of it.

Humboldt’s Essay on the Geography of Plants promoted an entirely different understanding of nature. Instead of only looking at an organism, … Humboldt now presented relationships between plants, climate and geography. Plants were grouped into zones and regions rather than taxonomic units. … He gave western science a new lens through which to view the natural world.

Revolutionary for his time, Humboldt rejected the Cartesian ideas of animals as mechanical objects. He also argued passionately against the growing approach in the sciences that put man atop and separate from the rest of the natural world. Promoting a concept of unity in nature, Humboldt saw nature as a “reflection of the whole … an organism in which the parts only worked in relation to each other.”

Furthermore, that “poetry was necessary to comprehend the mysteries of the natural world.”

Wulf paints one of Humboldt’s greatest achievements as his ability and desire to make science available to everyone. No one before him had “combined exact observation with a ‘painterly description of the landscape’”.

By contrast, Humboldt took his readers into the crowded streets of Caracas, across the dusty plains of the Llanos and deep into the rainforest along the Orinoco. As he described a continent that few British had ever seen, Humboldt captured their imagination. His words were so evocative, the Edinburgh Review wrote, that ‘you partake in his dangers; you share his fears, his success and his disappointment.’

In a time when travel was precarious, expensive and unavailable to most people, Humboldt brought his experiences to anyone who could read or listen.

On 3 November 1827, … Humboldt began a series of sixty-one lectures at the university. These proved so popular that he added another sixteen at Berlin’s music hall from 6 December. For six months he delivered lectures several days a week. Hundreds of people attended each talk, which Humboldt presented without reading from his notes. It was lively, exhilarating and utterly new. By not charging any entry fee, Humboldt democratized science: his packed audiences ranged from the royal family to coachmen, from students to servants, from scholars to bricklayers – and half of those attending were women. Berlin had never seen anything like it.

The subjectification of nature is about seeing nature, experiencing it. Humboldt was a master of bringing people to worlds they couldn’t visit, allowing them to feel a part of it. In doing so, he wanted to force humanity to see itself in nature. If we were all part of the giant web, then we all had a responsibility to understand it.

When he listed the three ways in which the human species was affecting the climate, he named deforestation, ruthless irrigation and, perhaps most prophetically, the ‘great masses of steam and gas’ produced in the industrial centres. No one but Humboldt had looked at the relationship between humankind and nature like this before.

His final opus, a series of books called Cosmos, was the culmination of everything that Humboldt had learned and discovered.

Cosmos was unlike any previous book about nature. Humboldt took his readers on a journey from outer space to earth, and then from the surface of the planet into its inner core. He discussed comets, the Milky Way and the solar system as well as terrestrial magnetism, volcanoes and the snow line of mountains. He wrote about the migration of the human species, about plants and animals and the microscopic organisms that live in stagnant water or on the weathered surface of rocks. Where others insisted that nature was stripped of its magic as humankind penetrated into its deepest secrets, Humboldt believed exactly the opposite. How could this be, Humboldt asked, in a world in which the coloured rays of an aurora ‘unite in a quivering sea flame’, creating a sight so otherworldly ‘the splendour of which no description can reach’? Knowledge, he said, could never ‘kill the creative force of imagination’ – instead it brought excitement, astonishment and wondrousness.

This is the ultimate subjectivity of nature. Being inspired by its beauty to try and understand how it works. Humboldt had respect for nature, for the wonders it contained, but also as the system in which we ourselves are an inseparable part.

Wulf concludes that Humboldt,

…was one of the last polymaths, and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently his more holistic approach – a scientific method that included art, history, poetry and politics alongside hard data – has fallen out of favour.

Maybe this is where the subjectivity of nature has gone. But we can learn from Humboldt the value of bringing it back.

In a world where we tend to draw a sharp line between the sciences and the arts, between the subjective and the objective, Humboldt’s insight that we can only truly understand nature by using our imagination makes him a visionary.

A little imagination is all it takes.

Warnings From Sleep: Nightmares and Protecting The Self https://myvibez.link/nightmares-and-protecting-the-self/ Thu, 13 Apr 2017 11:00:36 +0000

“All of this is evidence that the mind, although asleep,
is constantly concerned about the safety and integrity of the self.”

***

Rosalind Cartwright — also known as the Queen of Dreams — is a leading sleep researcher. In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she explores the role of nightmares and how we use sleep to protect ourselves.

When our waking hours are frightening, or when the fear goes unprocessed, the sleeping brain “may process horrible images with enough raw fear attached to awaken a sleeper with a horrendous nightmare.” The more trauma we have in our lives, the more likely we are to experience anxiety and nightmares after a horrific event.

The common feature is a threat of harm, accompanied by a lack of ability to control the circumstances of the threat, and the lack of or inability to develop protective behaviors.

The strategies we use for coping with extreme stress and fear are controversial. Is it better to deny the threatening event and avoid thinking about it, or to confront it and risk becoming sensitized to it?

One clear principle that comes out of this work is that the effects of trauma on sleep and dreaming depend on the nature of the threat. If direct action against the threat is irrelevant or impossible (as it would be if the trauma was well in the past), then denial may be helpful in reducing stress so that the person can get on with living as best they can. However, if the threat will be encountered over and over (such as with spousal abuse), and direct action would be helpful in addressing the threat, then denial by avoiding thinking about the danger (which helps in the short-term) will undermine problem-solving efforts and mastery in the long run. In other words, if nothing can be done, emotion-coping efforts to regulate the distress (dreaming) is a good strategy; but if constructive actions can be taken, waking problem-solving action is more adaptive.

What about nightmares?

Nightmares are defined as frightening dreams that wake the sleeper into full consciousness and with a clear memory of the dream imagery. These are not to be confused with sleep terrors. There are three main differences between these two. First, nightmare arousals are more often from late in the night’s sleep, when dreams are longest and the content is most bizarre and affect-laden (emotional); sleep terrors occur early in sleep. Second, nightmares are REM sleep-related, while sleep terrors come out of non-REM (NREM) slow-wave sleep (SWS). Third, sleepers experience vivid recall of nightmares, whereas with sleep terrors the experience is of full or partial amnesia for the episode itself, and only rarely is a single image recalled.

Nightmares abort REM sleep, a critical component of the always-on brain. Cartwright explains:

If we are right that the mind is continuously active throughout sleep—reviewing emotion-evoking new experiences from the day, scanning memory networks for similar experiences (which will defuse immediate emotional impact), revising by updating our organized sense of ourselves, and rehearsing new coping behaviors—nightmares are an exception and fail to perform these functions.

The impact is to temporarily relieve the negative emotion. The example Cartwright gives is “I am not about to be eaten by a monster. I am safe in my own bed.” But because the nightmare wakes the sleeper, it is of no help in regulating emotion (a critical role of sleep). As we learn to manage negative emotions while awake, that is, as we grow up, nightmares become less frequent and we develop skills for resolving our fears.

It’s not always fear that wakes us from a nightmare. We can also be woken by anger, disgust, and grief.

Cartwright concludes with an interesting insight on the role of sleep in consolidating and protecting “the self”:

[N]ightmares appear to be more common in those who have intense reactions to stress. The criteria cited for nightmare disorder in the diagnostic manual for psychiatric disorders, the Diagnostic and Statistical Manual IV-TR (DSM IV-TR), include this phrase “frightening dreams usually involving threats to survival, security, or self-esteem.” This theme may sound familiar: Remember that threats to self-esteem seem to precede NREM parasomnia awakenings. All of this is evidence that the mind, although asleep, is constantly concerned about the safety and integrity of the self.

The Twenty-four Hour Mind goes on to explore the history of sleep research through case studies and synthesis.

The Science of Sleep: Regulating Emotions and the 24 Hour Mind https://myvibez.link/twenty-four-hour-mind-rosalind-cartwright/ Wed, 29 Mar 2017 11:00:03 +0000

Even though we often think of sleeping as ‘switching off’, sleep is a complex state during which a lot of important things happen in our bodies. In particular, dreams are vital for helping our brains process emotions and encode new learning.

***

“Memory is never a precise duplicate of the original; instead, it is a continuing act of creation.”

— Rosalind Cartwright

Rosalind Cartwright is one of the leading sleep researchers in the world. Her unofficial title is Queen of Dreams.

In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she looks back on the progress of sleep research and reminds us there is much left in the black box of sleep that we have yet to shine light on.

In the introduction she underscores the elusive nature of sleep:

The idea that sleep is good for us, beneficial to both mind and body, lies behind the classic advice from the busy physician: “Take two aspirins and call me in the morning.” But the meaning of this message is somewhat ambiguous. Will a night’s sleep plus the aspirin be of help no matter what ails us, or does the doctor himself need a night’s sleep before he is able to dispense more specific advice? In either case, the presumption is that there is some healing power in sleep for the patient or better insight into the diagnosis for the doctor, and that the overnight delay allows time for one or both of these natural processes to take place. Sometimes this happens, but unfortunately sometimes it does not. Sometimes it is sleep itself that is the problem.

Cartwright underscores that our brains like to run in “automatic pilot” mode, which is one of the reasons that getting better at things requires concentrated and focused effort. She explains:

We do not always use our highest mental abilities, but instead run on what we could call “automatic pilot”; once learned, many of our daily cognitive behaviors are directed by habit, those already-formed points of view, attitudes, and schemas that in part make us who we are. The formation of these habits frees us to use our highest mental processes for those special instances when a prepared response will not do, when circumstances change and attention must be paid, choices made or a new response developed. The result is that much of our baseline thoughts and behavior operate unconsciously.

Relating this back to dreams, and one of the more fascinating parts of Cartwright’s research, is the role sleep and dreams play in regulating emotions. She explains:

When emotions evoked by a waking experience are strong, or more often were under-attended at the time they occurred, they may not be fully resolved by nighttime. In other words, it may take us a while to come to terms with strong or neglected emotions. If, during the day, some event challenges a basic, habitual way in which we think about ourselves (such as the comment from a friend, “Aren’t you putting on weight?”) it may be a threat to our self-concepts. It will probably be brushed off at the time, but that question, along with its emotional baggage, will be carried forward in our minds into sleep. Nowadays, researchers do not stop our investigations at the border of sleep but continue to trace mental activity from the beginning of sleep on into dreaming. All day, the conscious mind goes about its work planning, remembering, and choosing, or just keeping the shop running as usual. On balance, we humans are more action oriented by day. We stay busy doing, but in the inaction of sleep we turn inward to review and evaluate the implications of our day, and the input of those new perceptions, learnings, and—most important—emotions about what we have experienced.

What we experience as a dream is the result of our brain’s effort to match recent, emotion-evoking events to other similar experiences already stored in long-term memory. One purpose of this sleep-related matching process, this putting of similar memory experiences together, is to defuse the impact of those feelings that might otherwise linger and disrupt our moods and behaviors the next day. The various ways in which this extraordinary mind of ours works—the top-level rational thinking and executive deciding functions, the middle management of routine habits of thought, and the emotional relating and updating of the organized schemas of our self-concept—are not isolated from each other. They interact. The emotional aspect, which is often not consciously recognized, drives the not-conscious mental activity of sleep.

Later in the book, she writes more about how dreams regulate emotions:

Despite differences in terminology, all the contemporary theories of dreaming have a common thread — they all emphasize that dreams are not about prosaic themes, not about reading, writing, and arithmetic, but about emotion, or what psychologists refer to as affect. What is carried forward from waking hours into sleep are recent experiences that have an emotional component, often those that were negative in tone but not noticed at the time or not fully resolved. One proposed purpose of dreaming, of what dreaming accomplishes (known as the mood regulatory function of dreams theory) is that dreaming modulates disturbances in emotion, regulating those that are troublesome. My research, as well as that of other investigators in this country and abroad, supports this theory. Studies show that negative mood is down-regulated overnight. How this is accomplished has had less attention.

I propose that when some disturbing waking experience is reactivated in sleep and carried forward into REM, where it is matched by similarity in feeling to earlier memories, a network of older associations is stimulated and is displayed as a sequence of compound images that we experience as dreams. This melding of new and old memory fragments modifies the network of emotional self-defining memories, and thus updates the organizational picture we hold of “who I am and what is good for me and what is not.” In this way, dreaming diffuses the emotional charge of the event and so prepares the sleeper to wake ready to see things in a more positive light, to make a fresh start. This does not always happen over a single night; sometimes a big reorganization of the emotional perspective of our self-concept must be made—from wife to widow or married to single, say, and this may take many nights. We must look for dream changes within the night and over time across nights to detect whether a productive change is under way. In very broad strokes, this is the definition of the mood-regulatory function of dreaming, one basic to the new model of the twenty-four hour mind I am proposing.

In another fascinating part of her research, Cartwright outlines the role of sleep in skill enhancement. In short, “sleeping on it” is wise advice.

Think back to “take two aspirins and call me in the morning.” Want to improve your golf stroke? Concentrate on it before sleeping. An interval of sleep has been proven to bestow a real benefit for both laboratory animals and humans when they are tested on many different types of newly learned tasks. You will remember more items or make fewer mistakes if you have had a period of sleep between learning something new and the test of your ability to recall it later than you would if you spent the same amount of time awake.

Most researchers agree “with the overall conclusion that one of the ways sleep works is by enhancing the memory of important bits of new information and clearing out unnecessary or competing bits, and then passing the good bits on to be integrated into existing memory circuits.” This happens in two steps.

The first is in early NREM sleep when the brain circuits that were active while we were learning something new, a motor skill, say, or a new language, are reactivated and stay active until REM sleep occurs. In REM sleep, these new bits of information are then matched to older related memories already stored in long-term memory networks. This causes the new learning to stick (to be consolidated) and to remain accessible for when we need it later in waking.

As for the effect of alcohol before sleep, Carlyle Smith, a Canadian psychologist, found that it reduces memory formation, “reducing the number of rapid eye movements” in REM sleep. The eye movements, similar to the ones we make while reading, are how we scan visual information.

The mind is active 24 hours a day:

If the mind is truly working continuously, during all 24 hours of the day, it is not in its conscious mode during the time spent asleep. That time belongs to the unconscious. In waking, the two types of cognition, conscious and unconscious, are working sometimes in parallel, but also often interacting. They may alternate, depending on our focus of attention and the presence of an explicit goal. If we get bored or sleepy, we can slip into a third mode of thought, daydreaming. These thoughts can be recalled when we return to conscious thinking, which is not generally true of unconscious cognition unless we are caught in the act in the sleep lab. This third in-between state is variously called the preconscious or subconscious, and has been studied in a few investigations of what is going on in the mind during the transition before sleep onset.

Toward the end, Cartwright explores the role of sleep.

[I]n good sleepers, the mind is continuously active, reviewing experience from yesterday, sorting which new information is relevant and important to save due to its emotional saliency. Dreams are not without sense, nor are they best understood to be expressions of infantile wishes. They are the result of the interconnectedness of new experience with that already stored in memory networks. But memory is never a precise duplicate of the original; instead, it is a continuing act of creation. Dream images are the product of that creation. They are formed by pattern recognition between some current emotionally valued experience matching the condensed representation of similarly toned memories. Networks of these become our familiar style of thinking, which gives our behavior continuity and us a coherent sense of who we are. Thus, dream dimensions are elements of the schemas, and both represent accumulated experience and serve to filter and evaluate the new day’s input.

Sleep is a busy time, interweaving streams of thought with emotional values attached, as they fit or challenge the organizational structure that represents our identity. One function of all this action, I believe, is to regulate disturbing emotion in order to keep it from disrupting our sleep and subsequent waking functioning. In this book, I have offered some tests of that hypothesis by considering what happens to this process of down-regulation within the night when sleep is disordered in various ways.

Cartwright develops several themes throughout The Twenty-four Hour Mind. First is that the mind is continuously active. Second is the role of emotion in “carrying out the collaboration of the waking and sleeping mind.” This includes exploring whether the sleeping mind “contributes to resolving emotional turmoil stirred up by some real anxiety inducing circumstance.” Third is how sleeping contributes to how new learning is retained. Accumulated experiences serve to filter and evaluate the new day’s input.

Competition, Cooperation, and the Selfish Gene https://myvibez.link/richard-dawkins-selfish-gene/ Wed, 15 Mar 2017 11:00:59 +0000

Richard Dawkins wrote one of the best-selling serious works of science of all time.

Often labeled “pop science”, The Selfish Gene pulls together the “gene-centered” view of evolution: it is not really individuals being selected for in the competition for life, but their genes. The individual bodies (phenotypes) are simply carrying out the instructions of the genes. This leads most people to a very competition-focused view of life. But is that all?

***

More than 100 years before The Selfish Gene, Charles Darwin had famously outlined his Theory of Natural Selection in The Origin of Species.

We’re all hopefully familiar with this concept: species evolve over long periods of time through a process of heredity, variation, competition, and differential survival.

The mechanism of heredity was invisible to Darwin, but a series of scientists, not without a little argument, had figured it out by the 1970s: stretches of the DNA molecule (“genes”) encoded instructions for the building of physical structures. These genes were passed on to offspring in a particular way – the process of heredity. Advantageous genes were propagated in greater numbers. Disadvantageous genes, vice versa.

The Selfish Gene makes a particular kind of case: specific gene variants grow in proportion to a gene pool by, on average, creating advantaged physical bodies and brains. The genes do their work through “phenotypes” – the physical representation of their information. As Helena Cronin would put it in her book The Ant and the Peacock, “It is the net selective value of a gene’s phenotypic effect that determines the fate of the gene.”

This take on the evolutionary process became influential because of the range of hard-to-explain behavior it illuminated.

Why do we see altruistic behavior? Because copies of genes are present throughout a population, not just in single individuals, and altruism can cause great advantages in those gene variants surviving and thriving. (In other words, genes that cause individuals to sacrifice themselves for other copies of those same genes will tend to thrive.)

Why do we see more altruistic behavior among family members? Because they are closely related, and share more genes!

Many problems seemed to be solved here, and the Selfish Gene model became one for all-time, worth having in your head.

However, buried in the logic of the gene-centered view of evolution is a statistical argument. Gene variants rapidly grow in proportion to the rest of the gene pool because they provide survival advantages in the average environment that the gene will experience over its existence. Thus, advantageous genes “selfishly” dominate their environment before long. It’s all about gene competition.

This has led many people, some biologists especially, to view evolution solely through the lens of competition. Unsurprisingly, this also led to some false paradigms about a strictly “dog eat dog” world where unrestricted and ruthless individual competition is deemed “natural”.

But what about cooperation?

***

The complex systems researcher Yaneer Bar-Yam argues that not only is the Selfish Gene a limiting concept biologically and possibly wrong mathematically (too complex to address here, but if you want to read about it, check out these pieces), but that there are more nuanced ways to understand the way competition and cooperation comfortably coexist. Not only that, but Bar-Yam argues that this has implications for optimal team formation.

In his book Making Things Work, Bar-Yam lays out a basic message: Even in the biological world, competition is a limited lens through which to see evolution. There’s always a counterbalance of cooperation.

Counter to the traditional perspective, the basic message of this and the following chapter is that competition and cooperation always coexist. People see them as opposing and incompatible forces. I think that this is a result of an outdated and one-sided understanding of evolution…This is extremely useful in describing nature and society; the basic insight that “what works, works” still holds. It turns out, however, that what works is a combination of competition and cooperation.

Bar-Yam uses the analogy of a sports team which exists in context of a sports league – let’s say the NBA. Through this lens we can see why players, teams, and leagues compete and cooperate. (The obvious analogy is that genes, individuals, and groups compete and cooperate in the biological world.)

In general, when we think about the conflict between cooperation and competition in team sports, we tend to think about the relationships between the players on a team. We care deeply about their willingness to cooperate and we distinguish cooperative “team players” from selfish non-team players, complaining about the latter even when their individual skill is formidable.

The reason we want players to cooperate is so that they can compete better as a team. Cooperation at the level of the individual enables effective competition at the level of the group, and conversely, the competition between teams motivates cooperation between players. There is a constructive relationship between cooperation and competition when they operate at different levels of organization.

The interplay between levels is a kind of evolutionary process where competition at the team level improves the cooperation between players. Just as in biological evolution, in organized team sports there is a process of selection of winners through competition of teams. Over time, the teams will change how they behave; the less successful teams will emulate strategies of teams that are doing well.

At every level then, there is an interplay between cooperation and competition. Players compete for playing time, and yet must be intensively cooperative on the court to compete with other teams. At the next level up, teams compete with each other for victories, and yet must cooperate intensively to sustain a league at all.

They create agreed upon rules, schedule times to play, negotiate television contracts, and so on. This allows the league itself to compete with other leagues for scarce attention from sports fans. And so on, up and down the ladder.

Competition among players, teams, and leagues is certainly a crucial dynamic. But it isn’t all that’s going on: They’re cooperating intensely at every level, because a group of selfish individuals loses to a group of cooperative ones.

And it is the same among biological species. Genes are competing with each other, as are individuals, tribes, and species. Yet at every level, they are also cooperating. The success of the human species is clearly due to its ability to cooperate in large numbers; and yet any student of war can attest to its deadly competitive nature. Similar dynamics are at play with ants, rats, and chimpanzees, among other species of insect and animal. It’s a yin and yang world.
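
Bar-Yam’s claim that a group of selfish individuals loses to a group of cooperative ones is easy to demonstrate with a toy model. The sketch below is our own illustration, not a model from Bar-Yam’s book, and every parameter in it is arbitrary. Within each group, defectors always out-earn cooperators, so imitating high earners erodes cooperation; add competition between groups, though, and cooperation wins:

import random

def simulate(group_selection, groups=20, size=10, rounds=300, seed=1,
             cost=1.0, multiplier=3.0, imitate_prob=0.3):
    """Toy public-goods game. Within a group, a defector always out-earns
    a cooperator, so imitating high earners erodes cooperation. Between
    groups, more-cooperative groups earn more in total."""
    rng = random.Random(seed)
    # True = cooperator, False = defector; start with a 50/50 mix.
    pop = [[rng.random() < 0.5 for _ in range(size)] for _ in range(groups)]
    for _ in range(rounds):
        for g in pop:
            if rng.random() > imitate_prob:
                continue
            share = multiplier * cost * sum(g) / size   # everyone's cut of the pot
            payoff = [share - (cost if coop else 0.0) for coop in g]
            i, j = rng.randrange(size), rng.randrange(size)
            if payoff[j] > payoff[i]:    # copy the higher earner...
                g[i] = g[j]              # ...which is always the defector
        if group_selection:
            # Between-group competition: the most cooperative (highest
            # total payoff) group displaces the least cooperative one.
            best = max(pop, key=sum)
            worst = min(pop, key=sum)
            worst[:] = list(best)
    return sum(map(sum, pop)) / (groups * size)

print("cooperation without group competition:", simulate(False))
print("cooperation with group competition:   ", simulate(True))

Run it and the contrast is stark: imitation alone drives cooperation toward zero, while the same dynamics plus group-level competition drive it toward fixation.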

Bar-Yam thinks this has great implications for how to build successful teams.

Teams will improve naturally – in any organization – when they are involved in a competition that is structured to select those teams that are better at cooperation. Winners of a competition become successful models of behavior for less successful teams, who emulate their success by learning their strategies and by selecting and trading team members.

For a business, a society, or any other complex system made up of many individuals, this means that improvement will come when the system’s structure involves a competition that rewards successful groups. The idea here is not a cutthroat competition of teams (or individuals) but a competition with rules that incorporate some cooperative activity with a mutual goal.

The dictum that “politics is the art of marshaling hatreds” would seem to reflect this notion: a non-violent way for cooperative groups to compete for dominance. As would the incentive systems of majorly successful corporations like Nucor and the best hospital systems, like the Mayo Clinic. Even modern business books are picking up on it.

Individual competition is important and drives excellence. Yet, as Bar-Yam points out, it’s ultimately not a complete formula. Having teams compete is more effective: you need to harness competition and cooperation at every level. You want groups pulling together, creating emergent effects where the whole is greater than the sum of its parts (a recurrent theme throughout nature).

You should read his book for more details on both this idea and the concept of complex systems in general. Bar-Yam also elaborated on his sports analogy in a white-paper here. If you’re interested in complex systems, check out this post on frozen accidents. Also, for more on creating better groups, check out how Steve Jobs did it.

Scientific Concepts We All Ought To Know https://myvibez.link/scientific-concepts-know/ Mon, 13 Mar 2017 11:00:17 +0000

John Brockman’s online scientific roundtable Edge.org does something fantastic every year: It asks all of its contributors (hundreds of them) to answer one meaningful question. Questions like What Have You Changed Your Mind About? and What is Your Dangerous Idea?

This year’s was particularly awesome for our purposes: What Scientific Term or Concept Ought To Be More Known?

The answers give us a window into over 200 brilliant minds, with the simple filtering mechanism that there’s something they know that we should probably know, too. We wanted to highlight a few of our favorites for you.

***

From Steven Pinker, a very interesting thought on The Second Law of Thermodynamics (Entropy). This reminded me of the central thesis of The Origin of Wealth by Eric Beinhocker. (Which we’ll cover in more depth in the future: We referenced his work in the past.)


The Second Law of Thermodynamics states that in an isolated system (one that is not taking in energy), entropy never decreases. (The First Law is that energy is conserved; the Third, that a temperature of absolute zero is unreachable.) Closed systems inexorably become less structured, less organized, less able to accomplish interesting and useful outcomes, until they slide into an equilibrium of gray, tepid, homogeneous monotony and stay there.

In its original formulation the Second Law referred to the process in which usable energy in the form of a difference in temperature between two bodies is dissipated as heat flows from the warmer to the cooler body. Once it was appreciated that heat is not an invisible fluid but the motion of molecules, a more general, statistical version of the Second Law took shape. Now order could be characterized in terms of the set of all microscopically distinct states of a system: Of all these states, the ones that we find useful make up a tiny sliver of the possibilities, while the disorderly or useless states make up the vast majority. It follows that any perturbation of the system, whether it is a random jiggling of its parts or a whack from the outside, will, by the laws of probability, nudge the system toward disorder or uselessness. If you walk away from a sand castle, it won’t be there tomorrow, because as the wind, waves, seagulls, and small children push the grains of sand around, they’re more likely to arrange them into one of the vast number of configurations that don’t look like a castle than into the tiny few that do.

The Second Law of Thermodynamics is acknowledged in everyday life, in sayings such as “Ashes to ashes,” “Things fall apart,” “Rust never sleeps,” “Shit happens,” “You can’t unscramble an egg,” “What can go wrong will go wrong,” and (from the Texas lawmaker Sam Rayburn), “Any jackass can kick down a barn, but it takes a carpenter to build one.”

Scientists appreciate that the Second Law is far more than an explanation for everyday nuisances; it is a foundation of our understanding of the universe and our place in it. In 1915 the physicist Arthur Eddington wrote:

[…]

Why the awe for the Second Law? The Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order. An underappreciation of the inherent tendency toward disorder, and a failure to appreciate the precious niches of order we carve out, are a major source of human folly.

To start with, the Second Law implies that misfortune may be no one’s fault. The biggest breakthrough of the scientific revolution was to nullify the intuition that the universe is saturated with purpose: that everything happens for a reason. In this primitive understanding, when bad things happen—accidents, disease, famine—someone or something must have wanted them to happen. This in turn impels people to find a defendant, demon, scapegoat, or witch to punish. Galileo and Newton replaced this cosmic morality play with a clockwork universe in which events are caused by conditions in the present, not goals for the future. The Second Law deepens that discovery: Not only does the universe not care about our desires, but in the natural course of events it will appear to thwart them, because there are so many more ways for things to go wrong than to go right. Houses burn down, ships sink, battles are lost for the want of a horseshoe nail.

Poverty, too, needs no explanation. In a world governed by entropy and evolution, it is the default state of humankind. Matter does not just arrange itself into shelter or clothing, and living things do everything they can not to become our food. What needs to be explained is wealth. Yet most discussions of poverty consist of arguments about whom to blame for it.

More generally, an underappreciation of the Second Law lures people into seeing every unsolved social problem as a sign that their country is being driven off a cliff. It’s in the very nature of the universe that life has problems. But it’s better to figure out how to solve them—to apply information and energy to expand our refuge of beneficial order—than to start a conflagration and hope for the best.
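
Pinker’s “tiny sliver” is a counting claim, and you can check it yourself in a few lines of Python. This is just our illustration; the cutoff for calling a coin arrangement “orderly” (at most 5 percent tails) is an arbitrary choice:

from math import comb

def orderly_fraction(n, k):
    """Fraction of all 2**n head/tail patterns of n coins that look
    'orderly', meaning they contain at most k tails."""
    orderly = sum(comb(n, t) for t in range(k + 1))
    return orderly / 2 ** n

for n in (10, 100, 1000):
    # allow up to 5% tails and still call the state "orderly"
    print(n, orderly_fraction(n, n // 20))

The orderly fraction collapses as the system grows, which is why a random jiggle or whack almost always nudges a large system toward disorder.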

Richard Nisbett (a social psychologist) has a great one — a concept we’ve hit on before but one that is totally underappreciated by most people: the Fundamental Attribution Error.

Modern scientific psychology insists that explanation of the behavior of humans always requires reference to the situation the person is in. The failure to do so sufficiently is known as the Fundamental Attribution Error. In Milgram’s famous obedience experiment, two-thirds of his subjects proved willing to deliver a great deal of electric shock to a pleasant-faced middle-aged man, well beyond the point where he became silent after begging them to stop on account of his heart condition. When I teach about this experiment to undergraduates, I’m quite sure I’ve never convinced a single one that their best friend might have delivered that amount of shock to the kindly gentleman, let alone that they themselves might have done so. They are protected by their armor of virtue from such wicked behavior. No amount of explanation about the power of the unique situation into which Milgram’s subject was placed is sufficient to convince them that their armor could have been breached.

My students, and everyone else in Western society, are confident that people behave honestly because they have the virtue of honesty, conscientiously because they have the virtue of conscientiousness. (In general, non-Westerners are less susceptible to the fundamental attribution error, lacking as they do sufficient knowledge of Aristotle!) People are believed to behave in an open and friendly way because they have the trait of extroversion, in an aggressive way because they have the trait of hostility. When they observe a single instance of honest or extroverted behavior they are confident that, in a different situation, the person would behave in a similarly honest or extroverted way.

In actual fact, when large numbers of people are observed in a wide range of situations, the correlation for trait-related behavior runs about .20 or less. People think the correlation is around .80. In reality, seeing Carlos behave more honestly than Bill in a given situation increases the likelihood that he will behave more honestly in another situation from the chance level of 50 percent to the vicinity of 55-57 percent. People think that if Carlos behaves more honestly than Bill in one situation the likelihood that he will behave more honestly than Bill in another situation is 80 percent!

How could we be so hopelessly miscalibrated? There are many reasons, but one of the most important is that we don’t normally get trait-related information in a form that facilitates comparison and calculation. I observe Carlos in one situation when he might display honesty or the lack of it, and then not in another for perhaps a few weeks or months. I observe Bill in a different situation tapping honesty and then not another for many months.

This implies that if people received behavioral data in such a form that many people are observed over the same time course in a given fixed situation, our calibration might be better. And indeed it is. People are quite well calibrated for abilities of various kinds, especially sports. The likelihood that Bill will score more points than Carlos in one basketball game given that he did in another is about 67 percent—and people think it’s about 67 percent.

Our susceptibility to the fundamental attribution error—overestimating the role of traits and underestimating the importance of situations—has implications for everything from how to select employees to how to teach moral behavior.
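
Nisbett’s calibration numbers drop out of a simple formula if you assume behavior scores in two situations are bivariate normal with correlation r: the probability that a ranking of two people repeats is 1/2 + arcsin(r)/π. The model is our assumption, not Nisbett’s, but it reproduces his figures:

from math import asin, pi

def consistency(r):
    """P(the person who ranked higher in one situation ranks higher in
    another), assuming scores are bivariate normal with correlation r."""
    return 0.5 + asin(r) / pi

print(f"r = 0.20 -> {consistency(0.20):.0%}")  # ~56%: the actual trait correlation
print(f"r = 0.80 -> {consistency(0.80):.0%}")  # ~80%: the correlation people assume
print(f"r = 0.51 -> {consistency(0.51):.0%}")  # ~67%: the correlation implied by the basketball case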

Cesar Hidalgo, author of what looks like an awesome book, Why Information Grows, wrote about Criticality, a concept central to understanding complex systems:

In physics we say a system is in a critical state when it is ripe for a phase transition. Consider water turning into ice, or a cloud that is pregnant with rain. Both of these are examples of physical systems in a critical state.

The dynamics of criticality, however, are not very intuitive. Consider the abruptness of freezing water. For an outside observer, there is no difference between cold water and water that is just about to freeze. This is because water that is just about to freeze is still liquid. Yet, microscopically, cold water and water that is about to freeze are not the same.

When close to freezing, water is populated by gazillions of tiny ice crystals, crystals that are so small that water remains liquid. But this is water in a critical state, a state in which any additional freezing will result in these crystals touching each other, generating the solid mesh we know as ice. Yet, the ice crystals that formed during the transition are infinitesimal. They are just the last straw. So, freezing cannot be considered the result of these last crystals. They only represent the instability needed to trigger the transition; the real cause of the transition is the criticality of the state.

But why should anyone outside statistical physics care about criticality?

The reason is that history is full of individual narratives that maybe should be interpreted in terms of critical phenomena.

Did Rosa Parks start the civil rights movement? Or was the movement already running in the minds of those who had been promised equality and were instead handed discrimination? Was the collapse of Lehman Brothers an essential trigger for the Great Recession? Or was the financial system so critical that any disturbance could have done the trick?

As humans, we love individual narratives. We evolved to learn from stories and communicate almost exclusively in terms of them. But as Richard Feynman said repeatedly: The imagination of nature is often larger than that of man. So, maybe our obsession with individual narratives is nothing but a reflection of our limited imagination. Going forward we need to remember that systems often make individuals irrelevant. Just like none of your cells can claim to control your body, society also works in systemic ways.

So, the next time the house of cards collapses, remember to focus on why we were building a house of cards in the first place, instead of focusing on whether the last card was the queen of diamonds or a two of clubs.

The psychologist Adam Alter has another good one, a concept we all naturally miss from time to time due to the structure of our minds: the Law of Small Numbers.

In 1832, a Prussian military analyst named Carl von Clausewitz explained that “three quarters of the factors on which action in war is based are wrapped in a fog of . . . uncertainty.” The best military commanders seemed to see through this “fog of war,” predicting how their opponents would behave on the basis of limited information. Sometimes, though, even the wisest generals made mistakes, divining a signal through the fog when no such signal existed. Often, their mistake was endorsing the law of small numbers—too readily concluding that the patterns they saw in a small sample of information would also hold for a much larger sample.

Both the Allies and Axis powers fell prey to the law of small numbers during World War II. In June 1944, Germany flew several raids on London. War experts plotted the position of each bomb as it fell, and noticed one cluster near Regent’s Park, and another along the banks of the Thames. This clustering concerned them, because it implied that the German military had designed a new bomb that was more accurate than any existing bomb. In fact, the Luftwaffe was dropping bombs randomly, aiming generally at the heart of London but not at any particular location over others. What the experts had seen were clusters that occur naturally through random processes—misleading noise masquerading as a useful signal.

That same month, German commanders made a similar mistake. Anticipating the raid later known as D-Day, they assumed the Allies would attack—but they weren’t sure precisely when. Combing old military records, a weather expert named Karl Sonntag noticed that the Allies had never launched a major attack when there was even a small chance of bad weather. Late May and much of June were forecast to be cloudy and rainy, which “acted like a tranquilizer all along the chain of German command,” according to Irish journalist Cornelius Ryan. “The various headquarters were quite confident that there would be no attack in the immediate future. . . . In each case conditions had varied, but meteorologists had noted that the Allies had never attempted a landing unless the prospects of favorable weather were almost certain.” The German command was mistaken, and on Tuesday, June 6, the Allied forces launched a devastating attack amidst strong winds and rain.

The British and German forces erred because they had taken a small sample of data too seriously: The British forces had mistaken the natural clustering that comes from relatively small samples of random data for a useful signal, while the German forces had mistaken an illusory pattern from a limited set of data for evidence of an ongoing, stable military policy. To illustrate their error, imagine a fair coin tossed three times. You’ll have a one-in-four chance of turning up a string of three heads or tails, which, if you make too much of that small sample, might lead you to conclude that the coin is biased to reveal one particular outcome all or almost all of the time. If you continue to toss the fair coin, say, a thousand times, you’re far more likely to turn up a distribution that approaches five hundred heads and five hundred tails. As the sample grows, your chance of turning up an unbroken string shrinks rapidly (to roughly one-in-sixteen after five tosses; one-in-five-hundred after ten tosses; and one-in-five-hundred-thousand after twenty tosses). A string is far better evidence of bias after twenty tosses than it is after three tosses—but if you succumb to the law of small numbers, you might draw sweeping conclusions from even tiny samples of data, just as the British and Germans did about their opponents’ tactics in World War II.

Of course, the law of small numbers applies to more than military tactics. It explains the rise of stereotypes (concluding that all people with a particular trait behave the same way); the dangers of relying on a single interview when deciding among job or college applicants (concluding that interview performance is a reliable guide to job or college performance at large); and the tendency to see short-term patterns in financial stock charts when in fact short-term stock movements almost never follow predictable patterns. The solution is to pay attention not just to the pattern of data, but also to how much data you have. Small samples aren’t just limited in value; they can be counterproductive because the stories they tell are often misleading.
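
The coin-toss arithmetic in that passage is easy to verify: an unbroken string of n tosses can occur two ways (all heads or all tails) out of 2^n equally likely sequences, so its probability is 2/2^n. A few lines of Python (our sketch, not Alter’s) reproduce his figures:

def unbroken_string_prob(n):
    """Chance that n fair coin tosses land all heads or all tails."""
    return 2 / 2 ** n  # two "pure" sequences out of 2**n possibilities

for n in (3, 5, 10, 20):
    print(f"{n:>2} tosses: about 1 in {1 / unbroken_string_prob(n):,.0f}")
#  3 tosses: about 1 in 4
#  5 tosses: about 1 in 16
# 10 tosses: about 1 in 512      (roughly one in five hundred)
# 20 tosses: about 1 in 524,288  (roughly one in five hundred thousand)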

There are many, many more worth reading. Here’s a great chance to build your multidisciplinary skill-set.

The post Scientific Concepts We All Ought To Know appeared first on Farnam Street.

Who’s in Charge of Our Minds? The Interpreter https://myvibez.link/michael-gazzaniga-the-interpreter/ Mon, 13 Feb 2017
One of the most fascinating discoveries of modern neuroscience is that the brain is a collection of distinct modules (grouped, highly connected neurons) performing specific functions rather than a unified system.

We’ll get to why this is so important when we introduce The Interpreter later on.

This modular organization of the human brain is considered one of the key properties that set us apart from animals, so much so that it has displaced the older theory that our uniqueness stems from disproportionately bigger brains for our body size.

As neuroscientist Dr. Michael Gazzaniga points out in his wonderful book Who’s In Charge? Free Will and the Science of the Brain, in terms of numbers of cells the human brain is a proportionately scaled-up primate brain: it is what is expected for a primate of our size and does not possess relatively more neurons. Researchers have also found that the ratio between nonneuronal brain cells and neurons in human brain structures is similar to that found in other primates.

So it’s not the size of our brains or the number of neurons, it’s about the patterns of connectivity. As brains scaled up from insect to small mammal to larger mammal, they had to re-organize, for the simple reason that billions of neurons cannot all be connected to one another — some neurons would be way too far apart and too slow to communicate. Our brains would be gigantic and require a massive amount of energy to function.

Instead, our brain specializes and localizes. As Dr. Gazzaniga puts it, “Small local circuits, made of an interconnected group of neurons, are created to perform specific processing jobs and become automatic.” This is an important advance in our efforts to understand the mind.

Dr. Gazzaniga is most famous for his work studying split-brain patients, where many of the discoveries we’re talking about were refined and explored. Split-brain patients give us a natural controlled experiment to find out “what the brain is up to” — and more importantly, how it does its work. What Gazzaniga and his co-researchers found was fascinating.

Emergence

We experience our conscious mind as a single unified thing. But if Gazzaniga & company are right, it most certainly isn’t. How could a “specialized and localized” modular brain give rise to the feeling of “oneness” we feel so strongly about? It would seem there are too many things going on separately and locally:

Our conscious awareness is the mere tip of the iceberg of nonconscious processing. Below our level of awareness is the very busy nonconscious brain hard at work. Not hard for us to imagine are the housekeeping jobs the brain does constantly, struggling to keep homeostatic mechanisms up and running, such as our heart beating, our lungs breathing, and our temperature just right. Less easy to imagine, but being discovered left and right over the past fifty years, are the myriads of nonconscious processes smoothly putt-putting along. Think about it.

To begin with there are all the automatic visual and other sensory processing we have talked about. In addition, our minds are always being unconsciously biased by positive and negative priming processes, and influenced by category identification processes. In our social world, coalitionary bonding processes, cheater detection processes, and even moral judgment processes (to name only a few) are cranking away below our conscious mechanisms. With increasingly sophisticated testing methods, the number and diversity of identified processes is only going to multiply.

So what’s going on? Who’s controlling all this stuff? The idea is that the brain works more like traffic than a car. No one is controlling it!

It’s due to a principle of complex systems called emergence, and it explains why all of these “specialized and localized” processes can give rise to what seems like a unified mind.

The key to understanding emergence is to understand that there are different levels of organization. My favorite analogy is that of the car, which I have mentioned before. If you look at an isolated car part, such as a cam shaft, you cannot predict that the freeway will be full of traffic at 5:15 PM, Monday through Friday. In fact, you could not even predict that the phenomenon of traffic would occur if you just looked at a brake pad. You cannot analyze traffic at the level of car parts. Did the guy who invented the wheel ever visualize the 405 in Los Angeles on Friday evening? You cannot even analyze traffic at the level of the individual car. When you get a bunch of cars and drivers together, with the variables of location, time, weather, and society, all in the mix, then at that level you can predict traffic. A new set of laws emerge that aren’t predicted from the parts alone.

Emergence, Gazzaniga goes on, is how to understand the brain. Sub-atomic particles, atoms, molecules, cells, neurons, modules, the mind, and a collection of minds (a society) are all different levels of organization, with their own laws that cannot necessarily be predicted from the properties of the level below.

The unified mind we feel present emerges from the thousands of lower-level processes operating in parallel. Most of it is so automatic that we have no idea it’s going on. (Not only does the mind work bottom-up, but top-down processes also influence it. In other words, what you think influences what you see and hear.)

And when we do start consciously explaining what’s going on — or trying to — we start getting very interesting results. The part of our brain that seeks explanations and infers causality turns out to be a quirky little beast.

The Interpreter

Let’s say you were to see a snake and jump back, automatically and quickly. Did you choose that action? If asked, you’d almost certainly say so, but the truth is more complicated.

If you were to have asked me why I jumped, I would have replied that I thought I’d seen a snake. That answer certainly makes sense, but the truth is I jumped before I was conscious of the snake: I had seen it, but I didn’t know I had seen it. My explanation is from post hoc information I have in my conscious system: The facts are that I jumped and that I saw a snake. The reality, however, is that I jumped way before (in a world of milliseconds) I was conscious of the snake. I did not make a conscious decision to jump and then consciously execute it. When I answered that question, I was, in a sense, confabulating: giving a fictitious account of a past event, believing it to be true. The real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala. The reason I would have confabulated is that our human brains are driven to infer causality. They are driven to explain events that make sense out of the scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of the snake.

Here’s how it works: A thing happens, we react, we feel something about it, and then we go on explaining it. Sensory information is fed into an explanatory module which Gazzaniga calls The Interpreter, and studying split-brain patients showed him that it resides in the left hemisphere of the brain.

With that knowledge, Gazzaniga and his team were able to do all kinds of clever things to show how ridiculous our Interpreter can often be, especially in split-brain patients.

Take this case of a split-brain patient unconsciously making up a nonsense story when his two hemispheres are shown different images and he is instructed to choose a related image from a group of pictures. Read carefully:

We showed a split-brain patient two pictures: A chicken claw was shown to his right visual field, so the left hemisphere only saw the claw picture, and a snow scene was shown to the left visual field, so the right hemisphere saw only that. He was then asked to choose a picture from an array of pictures placed in full view in front of him, which both hemispheres could see.

The left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and the right hand pointed to a chicken (the most appropriate answer for the chicken claw). Then we asked why he chose those items. His left-hemisphere speech center replied, “Oh, that’s simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw.

Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand’s response without the knowledge of why it had picked that item, put into a context that would explain it. It interpreted the response in a context consistent with what it knew, and all it knew was: Chicken claw. It knew nothing about the snow scene, but it had to explain the shovel in his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that’s it! Makes sense.

What was interesting was that the left hemisphere did not say, “I don’t know,” which truly was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.

The left hand, responding to the snow scene Gazzaniga covertly showed the left visual field, pointed to the snow shovel. This all took place in the right hemisphere of the brain (think of it like an “X”: the right hemisphere controls the left side of the body and vice versa). But since it was a split-brain patient, the left hemisphere was not given any of the information about snow.

And yet, the left hemisphere is where the Interpreter resides! So what did the Interpreter do when asked to explain why the shovel was chosen, having no information about snow, only about chickens? It made up a story about shoveling chicken coops!

Gazzaniga goes on to explain several cases of being able to fool the left brain Interpreter over and over, and in often subtle ways.

***

This left-brain module is what we use to explain causality, seeking it for its own sake. The Interpreter, like all of our mental modules, is a wonderful adaptation that has led us to understand and explain causality and the world around us, to our great advantage. But as any good student of social psychology knows, we will simply make up a plausible story if we have nothing solid to go on, leading to a narrative fallacy.

This leads to odd results that seem pretty maladaptive, like our tendency to gamble like idiots. (In his famous talk, The Psychology of Human Misjudgment, Charlie Munger calls this mis-gambling compulsion.) But outside of the artifice of the casino, the Interpreter works quite well.

But here’s the catch. In the words of Gazzaniga, “The interpreter is only as good as the information it gets.”

The interpreter receives the results of the computations of a multitude of modules. It does not receive the information that there are multitudes of modules. It does not receive the information about how the modules work. It does not receive the information that there is a pattern-recognition system in the right hemisphere. The interpreter is a module that explains events from the information it does receive.

[…]

The interpreter is receiving data from the domains that monitor the visual system, the somatosensory system, the emotions, and cognitive representations. But as we just saw above, the interpreter is only as good as the information it receives. Lesions or malfunctions in any one of these domain-monitoring systems lead to an array of peculiar neurological conditions that involve the formation of either incomplete or delusional understandings about oneself, other individuals, objects, and the surrounding environment, manifesting in what appears to be bizarre behavior. It no longer seems bizarre, however, once you understand that such behaviors are the result of the interpreter getting no, or bad, information.

This can account for a lot of the ridiculous behavior and ridiculous narratives we see around us. The Interpreter must deal with what it’s given, and as Gazzaniga’s work shows, it can be manipulated and tricked. He calls it “hijacking” — and when the Interpreter is hijacked, it makes pretty bad decisions and generates strange explanations.

Anyone who’s watched a friend acting hilariously when wearing a modern VR headset can see how easy it is to “hijack” one’s sensory perceptions even if the conscious brain “knows” that it’s not real. And of course, Robert Cialdini once famously described this hijacking process as a “click, whirr” reaction to social stimuli. It’s a powerful phenomenon.

***

What can we learn from this?

The story of the multi-modular mind and the Interpreter module shows us that the brain does not have a rational “central command station” — your mind is at the mercy of what it’s fed. The Interpreter is constantly weaving a story of what’s going on around us, applying causal explanations to the data it’s being fed; doing the best job it can with what it’s got.

This is generally useful: a few thousand generations of data have honed our modules to understand the world well enough to keep us surviving and thriving. The job of the brain is to pass on our genes. But that doesn’t mean it is always making optimal decisions in the modern world.

We must realize that our brain can be fooled; it can be tricked and played with, and we won’t always realize it immediately. Our Interpreter will weave a plausible story; that’s its job.

For this reason, Charlie Munger employs a “two-track” analysis: What are the facts; and where is my brain fooling me? We’re wise to follow suit.

The post Who’s in Charge of Our Minds? The Interpreter appeared first on Farnam Street.

A Cascade of Sand: Complex Systems in a Complex Time https://myvibez.link/cascade-of-sand-complex-systems/ Wed, 08 Feb 2017
We live in a world filled with rapid change: governments topple, people rise and fall, and technology has created a connectedness the world has never experienced before. Joshua Cooper Ramo believes this environment has created an “avalanche of ceaseless change.”

In his book, The Age of the Unthinkable: Why the New World Disorder Constantly Surprises Us And What We Can Do About It, he outlines what this new world looks like and gives us prescriptions on how best to deal with the disorder around us.

Ramo believes that we are entering a revolutionary age that will render seemingly fortified institutions weak, and weak movements strong. He feels we aren’t well prepared for these radical shifts, as those in positions of power tend to bring antiquated ideologies to the issues they face. Generally, they treat anything complex as one-dimensional.

Unfortunately, whether they are running corporations or foreign ministries or central banks, some of the best minds of our era are still in thrall to an older way of seeing and thinking. They are making repeated misjudgments about the world. In a way, it’s hard to blame them. Mostly they grew up at a time when the global order could largely be understood in simpler terms, when only nations really mattered, when you could think there was a predictable relationship between what you wanted and what you got. They came of age as part of a tradition that believed all international crises had beginnings and, if managed well, ends.

This is one of the main flaws of traditional thinking about managing conflict/change: we identify a problem, decide on a path forward, and implement that solution. We think in linear terms and see a finish line once the specific problem we have discovered is ‘solved.’

In this day and age (and probably in all days and ages, whether they realized it or not) we have to accept that the finish line is constantly moving and that, in fact, there never will be a finish line. Solving one problem may fix an issue for a time but it tends to also illuminate a litany of new problems. (Many of which were likely already present but hiding under the old problem you just “fixed”.)

In fact, our actions in trying to solve X will sometimes have a cascade effect because the world is actually a series of complex and interconnected systems.

Some great thinkers have spoken about these problems in the past. Ramo highlights some interesting quotes from the Nobel Prize speech that Austrian economist Friedrich August von Hayek gave in 1974, entitled The Pretence of Knowledge.

To treat complex phenomena as if they were simple, to pretend that you could hold the unknowable in the cleverly crafted structure of your ideas —he could think of nothing that was more dangerous. “There is much reason,” Hayek said, “to be apprehensive about the long-run dangers created in a much wider field by the uncritical acceptance of assertions which have the appearance of being scientific.”

Concluding his Nobel speech, Hayek warned, “If man is not to do more harm than good in his efforts to improve the social order, he will have to learn that in this, as in all other fields where essential complexity of an organized kind prevails, he cannot acquire the full knowledge which would make mastery of the events possible.” Politicians and thinkers would be wise not to try to bend history as “the craftsman shapes his handiwork, but rather to cultivate growth by providing the appropriate environment, in the manner a gardener does for his plants.”

This is an important distinction: the idea that we need to be gardeners instead of craftsmen. When we are merely creating something we have a sense of control; we have a plan and an end state. When the shelf is built, it’s built.

Being a gardener is different. You have to prepare the environment; you have to nurture the plants and know when to leave them alone. You have to make sure the environment is hospitable to everything you want to grow (different plants have different needs), and after the harvest you aren’t done. You need to turn the earth and, in essence, start again. There is no end state if you want something to grow.

* * *

So, if most of the threats we face today are so multifaceted and complex that we can’t use the majority of the strategies that have worked historically, how do we approach the problem? A Danish theoretical physicist named Per Bak had an interesting view of this, which he termed self-organized criticality, and it comes with an excellent experiment/metaphor that helps to explain the concept.

Bak’s research focused on answering the following question: if you created a cone of sand grain by grain, at what point would you trigger a little sand avalanche? The breakdown of the cone was inevitable, but he wanted to know whether he could somehow predict when it would happen.

Much as there is a precise temperature at which water starts to boil, Bak hypothesized there was a specific point at which the stack became unstable, where adding a single grain of sand could trigger the avalanche.

In his work, Bak came to realize that the sandpile was inherently unpredictable. He discovered that there were times, even when the pile had reached a critical state, that an additional grain of sand would have no effect:

“Complex behavior in nature,” Bak explained, “reflects the tendency of large systems to evolve into a poised ‘critical’ state, way out of balance, where minor disturbances may lead to events, called avalanches, of all sizes.” What Bak was trying to study wasn’t simply stacks of sand, but rather the underlying physics of the world. And this was where the sandpile got interesting. He believed that sandpile energy, the energy of systems constantly poised on the edge of unpredictable change, was one of the fundamental forces of nature. He saw it everywhere, from physics (in the way tiny particles amassed and released energy) to the weather (in the assembly of clouds and the hard-to-predict onset of rainstorms) to biology (in the stutter-step evolution of mammals). Bak’s sandpile universe was violent —and history-making. It wasn’t that he didn’t see stability in the world, but that he saw stability as a passing phase, as a pause in a system of incredible —and unmappable —dynamism. Bak’s world was like a constantly spinning revolver in a game of Russian roulette, one random trigger-pull away from explosion.
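
Bak’s sandpile (developed with Chao Tang and Kurt Wiesenfeld) is simple enough to simulate. Grains drop one at a time onto a grid; any site that accumulates four grains topples, shedding one grain to each neighbor, which can set off further topplings. The Python sketch below is a rough illustration, with grid size, grain count, and random seed chosen arbitrarily by us:

import numpy as np

rng = np.random.default_rng(42)
N = 20                                 # side length of the sand table
grid = np.zeros((N, N), dtype=int)     # grains currently at each site
avalanche_sizes = []

for _ in range(20_000):
    # drop one grain at a random site
    x, y = rng.integers(0, N, size=2)
    grid[x, y] += 1
    topples = 0
    # relax: any site holding 4+ grains topples, sending one grain
    # to each neighbor; grains toppled off the edge are lost
    while (unstable := np.argwhere(grid >= 4)).size > 0:
        for i, j in unstable:
            grid[i, j] -= 4
            topples += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < N and 0 <= nj < N:
                    grid[ni, nj] += 1
    avalanche_sizes.append(topples)

sizes = np.array(avalanche_sizes)
print("grains dropped: ", sizes.size)
print("quiet drops:    ", np.sum(sizes == 0))
print("largest cascade:", sizes.max())

Run it and most grains cause nothing at all, while a rare few set off cascades that sweep much of the grid: avalanches of every size, with no way to tell in advance which grain will be the last straw.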

Our thinking is traditionally very linear, but if we start treating systems as more like sandpiles, we shift into second-order thinking. This means we can no longer assume that a given action will produce a given reaction: it may or may not, depending on the precise initial conditions.

This dynamic sandpile energy demands that we accept the basic unpredictability of the global order —one of those intellectual leaps that sounds simple but that immediately junks a great deal of traditional thinking. It also produces (or should produce) a profound psychological shift in what we can and can’t expect from the world. Constant surprise and new ideas? Yes. Stable political order, less complexity, the survival of institutions built for an older world? No.

Ramo isn’t arguing that complex systems are incomprehensible and fundamentally flawed. These systems are manageable; they just require a divergence from the old ways of thinking, the linear way that didn’t account for all the invisible connections in the sand.

Look at something like the Internet; it’s a perfect example of a complex system with a seemingly infinite amount of connections, but it thrives. This system is constantly bombarded with unsuspected risk, but it is so malleable that it has yet to feel the force of an avalanche. The Internet was designed to thrive in a hostile environment and its complexity was embraced. Unfortunately, for every adaptive system like the Internet, there seems to be a maladaptive system, ones so rigid they will surely break in a world of complexity.

The Age of the Unthinkable goes on to show us historical examples of systems that did indeed break; this helps to frame where we have been particularly fragile in the past and where the mistakes in our thinking may have been. In the back half of the book, Ramo outlines strategies he believes will help us become more Antifragile; he calls this approach “Deep Security”.

Implementing these strategies will likely be met with considerable resistance; many people in positions of power benefit from the systems staying as they are. Revolutions are never easy but, as we’ve shown, even one grain of sand can have a huge impact.

The post A Cascade of Sand: Complex Systems in a Complex Time appeared first on Farnam Street.

Survival of the Fittest: Groups versus Individuals https://myvibez.link/survival-of-the-fittest/ Thu, 02 Feb 2017
If ‘survival of the fittest’ is the prime evolutionary tenet, then why do some behaviors that lead to winning or success, seemingly justified by this concept, ultimately leave us cold?

Taken from Darwin’s theory of evolution, survival of the fittest is often conceptualized as the advantage that accrues with certain traits, allowing an individual to thrive and survive in its environment by out-competing others for limited resources. Qualities such as strength and speed were beneficial to our ancestors, allowing them to survive in demanding environments, and thus our general admiration for these qualities is now understood through this evolutionary lens.

However, in humans this evolutionary concept is often co-opted to defend a wide range of behaviors, not all of them good: winning by cheating, or stepping on others to achieve goals.

Why is this?

One answer is that humans are not only concerned with our individual survival, but the survival of our group. (Which, of course, leads to improved individual survival, on average.) This relationship between individual and group survival is subject to intense debate among biologists.

Selecting for Unselfishness?

Humans display a wide range of behavior that seems counter-intuitive to the survival of the fittest mentality until you consider that we are an inherently social species, and that keeping our group fit is a wise investment of our time and energy.

One behavior humans display a great deal of is “indirect reciprocity”. Distinguished from “direct reciprocity”, in which I help you and you help me, indirect reciprocity confers no immediate benefit on the one doing the helping. Either I help you and then you help someone else at a later time, or I help you and then someone else, some time in the future, helps me.

Martin A. Nowak and Karl Sigmund have studied this phenomenon in humans for many years. Essentially, they ask the question “How can natural selection promote unselfish behavior?”

Many of their studies have shown that “propensity for indirect reciprocity is widespread. A lot of people choose to do it.”

Furthermore:

Humans are the champions of reciprocity. Experiments and everyday experience alike show that what Adam Smith called ‘our instinct to trade, barter and truck’ relies to a considerable extent on the widespread tendency to return helpful and harmful acts in kind. We do so even if these acts have been directed not to us but to others.

We care about what happens to others, even if the entire event is one that we have no part in. If you consider evolution in terms of survival of the fittest group, rather than individual, this makes sense.

Supporting those who harm others can breed mistrust and instability. And if we don’t trust each other, day to day transactions in our world will be completely undermined. Sending your kids to school, banking, online shopping: We place a huge amount of trust in our fellow humans every day.

If we consider this idea of group survival, we can also see value in a wider range of human attributes and behaviors. It is now not about “I have to be the fittest in every possible way in order to survive“, but recognizing that I want fit people in my group.

In her excellent book, Quiet: The Power of Introverts in a World That Can’t Stop Talking, author Susan Cain explores, among other things, the relevance of introverts to social function and how their contributions benefit the group as a whole. Introverts are people who “like to focus on one task at a time, … listen more than they talk, think before they speak, … [and] tend to dislike conflict.”

Though out of step with the culture of “the extrovert ideal” we are currently living in, introverts contribute significantly to our group fitness. Without them we would be deprived of much of our art and scientific progress.

Cain argues:

Among evolutionary biologists, who tend to subscribe to the vision of lone individuals hell-bent on reproducing their own DNA, the idea that species include individuals whose traits promote group survival is hotly debated and, not long ago, could practically get you kicked out of the academy.

But the idea makes sense. If personality types such as introverts aren’t the fittest for survival, then why did they persist? Possibly because of their value to the group.

Cain looks at the work of Dr. Elaine Aron, who has spent years studying introverts, and is one herself. In explaining the idea of different personality traits as part of group selection in evolution, Aron offers this story in an article posted on her website:

I used to joke that when a group of prehistoric humans were sitting around the campfire and a lion was creeping up on them all, the sensitive ones [introverts] would alert the others to the lion’s prowling and insist that something be done. But the non-sensitive ones [extroverts] would be the ones more likely to go out and face the lion. Hence there are more of them than there are of us, since they are willing and even happy to do impulsive, dangerous things that will kill many of us. But also, they are willing to protect us and hunt for us, if we are not as good at killing large animals, because the group needs us. We have been the healers, trackers, shamans, strategists, and of course the first to sense danger. So together the two types survive better than a group of just one type or the other.

The lesson is this: Groups survive better if they have individuals with different strengths to draw on. The more tools you have, the more likely you can complete a job. The more people you have that are different the more likely you can survive the unexpected.

Which Group?

How, then, does one define the group? Who am I willing to help? Arguably, I’m most willing to sacrifice for my children or family, my immediate little group. But history is full of examples of those who sacrificed significantly for their tribes or sports teams or countries.

We can’t argue that it is just about the survival of our own DNA. That may explain why I will throw myself in front of a speeding car to protect my child, but the beaches of Normandy were stormed by thousands of young, childless men. Soldiers from World War I, when interviewed about why they would jump out of a trench and try to take a slice of no man’s land, most often said they did it “for the guy next to them”. They initially joined the military out of a sense of “national pride”, or other very non-DNA reasons.

Clearly, human culture is capable of defining “groups” very broadly through a complex system of mythology, creating deep loyalty to “imaginary” groups like sports teams, corporations, nations, or religions.

As technology shrinks our world, our group expands. Technological advancement pushes us into higher degrees of specialization, so that individual survival becomes clearly linked with group survival.

I know that I have a vested interest in doing my part to maintain the health of my group. I am very attached to indoor plumbing and grocery stores, yet don’t participate at all in the giant webs that allow those things to exist in my life. I don’t know anything about the configuration of the municipal sewer system or how to grow raspberries. (Of course, Adam Smith called this process of the individual benefitting the group through specialization the Invisible Hand.)

When we see ourselves as part of a group, we want the group to survive and even thrive. Yet how big can our group be? Is there always an us vs. them? Does our group surviving always have to be at the expense of others? We leave you with the speculation.

The post Survival of the Fittest: Groups versus Individuals appeared first on Farnam Street.

Principles for an Age of Acceleration https://myvibez.link/principles-age-acceleration/ Tue, 10 Jan 2017
We live in an age where technology is developing at a rate faster than what any individual can keep up with. To survive in an age of acceleration, we need a new way of thinking about technology.

***

MIT Media Lab is a creative nerve center where great ideas like One Laptop per Child, LEGO Mindstorms, and Scratch programming language have emerged.

Its director, Joi Ito, has done a lot of thinking about how prevailing systems of thought will not be the ones to see us through the coming decades. In his book Whiplash: How to Survive our Faster Future, he notes that sometime late in the last century, technology began to outpace our ability to understand it.

We are blessed (or cursed) to live in interesting times, where high school students regularly use gene editing techniques to invent new life forms, and where advancements in artificial intelligence force policymakers to contemplate widespread, permanent unemployment. Small wonder our old habits of mind—forged in an era of coal, steel, and easy prosperity—fall short. The strong no longer necessarily survive; not all risk needs to be mitigated; and the firm is no longer the optimum organizational unit for our scarce resources.

Ito’s ideas are not specific to our moment in history, but adaptive responses to a world with certain characteristics:

1. Asymmetry
In our era, effects are no longer proportional to the size of their source. The biggest change-makers of the future are the small players: “start-ups and rogues, breakaways and indie labs.”

2. Complexity
The level of complexity is shaped by four inputs, all of which are extraordinarily high in today’s world: heterogeneity, interconnection, interdependency and adaptation.

3. Uncertainty
Not knowing is okay. In fact, we’ve entered an age where the admission of ignorance offers strategic advantages over expending resources (subcommittees and think tanks and sales forecasts) toward the increasingly futile goal of forecasting future events.

When these three conditions are in place, certain guiding principles serve us best. In his book, Ito shares some of the maxims that organize his “anti-disciplinary” Media Lab in a complex and uncertain world.

Emergence over Authority

Complex systems show properties that their individual parts don’t possess, and we call this process “emergence”. For example, life is an emergent property of chemistry. Groups of people also produce a wondrous variety of emergent behaviors—languages, economies, scientific revolutions—when each intellect contributes to a whole that is beyond the abilities of any one person.

Some organizational structures encourage this kind of creativity more than others. Authoritarian systems only allow for incremental changes, whereas nonlinear innovation emerges from decentralized networks with a low barrier to entry. As Steven Johnson describes in Emergence, when you plug more minds into the system, “isolated hunches and private obsessions coalesce into a new way of looking at the world, shared by thousands of individuals.”

Synthetic biology best exemplifies the type of new field that can arise from emergence. Not to be confused with genetic engineering, which modifies existing organisms, synthetic biology aims to create entirely new forms of life.

Having emerged in the era of open-source software, synthetic biology is becoming an exercise in radical collaboration between students, professors, and a legion of citizen scientists who call themselves biohackers. Emergence has made its way into the lab.

As a result, the cost of sequencing DNA is plummeting at six times the rate of Moore’s Law, and a large Registry of Standard Biological Parts, or BioBricks, now offers genetic components that perform well-understood functions in whatever organism is being created, like a block of Lego.

There is still a place for leaders in an organization that fosters emergence, but the role may feel unfamiliar to a manager from a traditional hierarchy. The new leader spends less time leading and more time “gardening”—pruning the hedges, watering the flowers, and otherwise getting out of the way. (As biologist Lewis Thomas puts it, a great leader must get the air right.)

Pull over Push

“Push” strategies involve directing resources from a central source to sites where, in the leader’s estimation, they are likely to be needed or useful. In contrast, projects that use “pull” strategies attract intellectual, financial and physical resources to themselves just as they are needed, rather than stockpiling them.

Ito is a proponent of the sharing economy, through which a startup might tap into the global community of freelancers and volunteers for a custom-made task force instead of hiring permanent teams of designers, programmers or engineers.

Here’s a great example:

When the Fukushima nuclear meltdown happened, Ito was living just outside of Tokyo. The Japanese government took a command-and-control (“push”) approach to the disaster, in which information would slowly climb up the hierarchy, and decisions would then be passed down stepwise to the ground-level workers.

It soon became clear that the government was not equipped to assess or communicate the radioactivity levels of each neighborhood, so Ito and his friends took the problem into their own hands. Pulling in expertise and money from far-flung scientists and entrepreneurs, they formed a citizen science group called Safecast, which built its own GPS-equipped Geiger counters and strapped them to cars for faster monitoring. They launched a website that continues to share data – more than 50 million data points so far – about local environments.

To benefit from these kinds of “pull” strategies, it pays to foster an environment that is rich with weak ties – a wide network of acquaintances from which to draw just-in-time knowledge and resources, as Ito did with Safecast.

Compasses over Maps

Detailed maps can be more misleading than useful in a fast-changing world, where a compass is the tool of choice. In the same way, organizations that plan exhaustively will be outpaced in an accelerating world by ones that are guided by a more encompassing mission.

A map implies a straightforward knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.

One advantage to the compass approach is that when a roadblock inevitably crops up, there is no need to go back to the beginning to form another plan or draw up multiple plans for each contingency. You simply navigate around the obstacle and continue in your chosen direction.

It is impossible, in any case, to make detailed plans for a complex and creative organization. The way to set a compass direction for a company is by creating a culture—or set of mythologies—that animates the parts in a common worldview.

In the case of the MIT Media Lab, that compass heading is described in three values: “Uniqueness, Impact, and Magic”. Uniqueness means that if someone is working on a similar project elsewhere, the lab moves on.

Rather than working to discover knowledge for its own sake, the lab works in the service of Impact, through start-ups and physical creations. It was expressed in the lab’s motto “Deploy or die”, but Barack Obama suggested they work on their messaging, and Ito shortened it to “Deploy.”

The Magic element, though hard to define, speaks to the delight that playful originality so often awakens.

Both students and faculty at the lab are there to learn, but not necessarily to be “educated”. Learning is something you pursue for yourself, after all, whereas education is something that’s done to you. The result is “agile, scrappy, permissionless innovation”.

The new job landscape requires more creativity from everybody. The people who will be most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.

Other principles discussed in Whiplash include Risk over Safety, Disobedience over Compliance, Practice over Theory, Diversity over Ability, Resilience over Strength, and Systems over Objects.

The post Principles for an Age of Acceleration appeared first on Farnam Street.

The Founder Principle: A Wonderful Idea from Biology https://myvibez.link/the-founder-principle/ Thu, 24 Nov 2016
We’ve all been taught natural selection: the mechanism by which species evolve through differential reproductive success. Most of us are familiar with the idea that random mutations in DNA cause variances in offspring, some of which survive more frequently than others. However, this is only part of the story.

Sometimes other situations cause massive changes in species populations, and they’re often more nuanced and tough to spot.

One such concept comes from one of the most influential biologists in history, Ernst Mayr. He called it the Founder Principle: a mechanism by which new species are created by a splintered population, often with lower genetic diversity and an increased risk of extinction.

In the brilliant The Song of the Dodo: Island Biogeography in an Age of Extinctions, David Quammen gives us not only the stories of many brilliant biological naturalists, including Mayr, but also a deep dive into the core concepts of evolution and extinction, including the founder principle.

Quammen begins by outlining the basic idea:

When a new population is founded in an isolated place, the founders usually constitute a numerically tiny group – a handful of lonely pioneers, or just a pair, or maybe no more than one pregnant female. Descending from such a small number of founders, the new population will carry only a minuscule and to some extent random sample of the gene pool of the base population. The sample will most likely be unrepresentative, encompassing less genetic diversity than the larger pool. This effect shows itself whenever a small sample is taken from a large aggregation of diversity; whether the aggregation consists of genes, colored gum balls, M&M’s, the cards of a deck, or any other collection of varied items, a small sample will usually contain less diversity than the whole.

Why does the founder principle happen? It’s basically applied probability. Perhaps an example will help illuminate the concept.

Think of yourself playing a game of poker (five-card draw) with a friend. The deck of cards is separated into four suits: diamonds, hearts, clubs, and spades, each suit having 13 cards, for a total of 52 cards.

Now look at your hand of five cards. Do you have one card from each suit? Maybe. Are all five cards from the same suit? Probably not, but it is possible. Will you get the ace of spades? Maybe, but not likely.

This is a good metaphor for how the founder principle works. The gene pool carried by a small group of founders is unlikely to be precisely representative of the gene pool of the larger group. In some rare cases it will be very unrepresentative, like you getting dealt a straight flush.

It starts to get interesting when this founder population starts to reproduce, and genetic drift causes the new population to diverge significantly from its ancestors. Quammen explains:

Already isolated geographically from its base population, the pioneer population now starts drifting away genetically. Over the course of generations, its gene pool becomes more and more different from the gene pool of the base population – different both as to the array of alleles (that is, the variant forms of a given gene) and as to the commonness of each allele.

The founder population, in some cases, will become so different that it can no longer mate with the original population. This new species may even be a competitor for resources if the two populations are ever reintroduced. (Say, if a land bridge is created between two islands, or humans bring two species back in contact.)

Going back to our card metaphor, let’s pretend that you and your friend are playing with four decks of cards — 208 total cards. Say we randomly pulled out forty cards from those decks. If there are absolutely no kings in the forty cards you are playing with, you will never be able to create a royal flush (ace+king+queen+jack+10 of the same suit). It doesn’t matter how the cards are dealt, you can never make a royal flush with no kings.

Thus it is with species: If a splintered-off population isn’t carrying a specific gene variant (allele), that variant can never be represented in the newly created population, no matter how prolific that gene may have been in the original population. It’s gone. And as the rarest variants disappear, the new population becomes increasingly unlike the old one, especially if the new population is small.

Some alleles are common within a population, some are rare. If the population is large, with thousands or millions of parents producing thousands or millions of offspring, the rare alleles as well as the common ones will usually be passed along. Chance operation at high numbers tends to produce stable results, and the proportions of rarity and commonness will hold steady. If the population is small, though, the rare alleles will most likely disappear […] As it loses its rare alleles by the wayside, a small pioneer population will become increasingly unlike the base population from which it derived.
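
Quammen’s sampling argument is easy to make concrete. The Python sketch below is our own illustration (the six alleles and their frequencies are invented for the example); it draws founder groups of various sizes from a base population and counts how many variants survive the split:

import numpy as np

rng = np.random.default_rng(7)

# one gene with six variants (alleles) in the base population;
# frequencies are illustrative: a few common alleles, a few rare
frequencies = [0.40, 0.30, 0.15, 0.10, 0.04, 0.01]

def surviving_alleles(founders, trials=5_000):
    """Average number of distinct alleles carried by a random
    founder group (two gene copies per diploid founder)."""
    copies = rng.choice(len(frequencies), size=(trials, 2 * founders),
                        p=frequencies)
    return np.mean([len(set(row)) for row in copies])

for n in (1, 2, 10, 100):
    print(f"{n:>3} founders carry about {surviving_alleles(n):.2f} of 6 variants")
#   1 founders carry about 1.7 of 6 variants
#   2 founders carry about 2.6 of 6 variants
#  10 founders carry about 4.6 of 6 variants
# 100 founders carry about 5.9 of 6 variants

A pair of founders almost always leaves the one-percent allele behind, the missing king in Quammen’s card metaphor; only a large founder group reliably carries the full pool.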

Some of this genetic loss may be positive (a gene that causes a rare disease may be missing), some may be negative (a gene for a useful attribute may be missing) and some may be neutral.

The neutral ones are the most interesting: A neutral gene at one point in time may become a useful gene at another point. It’s like playing a round of poker where 8’s are suddenly declared “wild,” and that card suddenly becomes much more important than it was the hand before. The same goes for animal traits.

Take a mammal population living on an island that has lost all of its ability to swim. That won’t mean much if all is well and it is never required to swim. But the moment there is a natural disaster such as a fire, the ability to swim the short distance to the mainland could be the difference between survival and extinction.

That’s why the founder principle is so dangerous: The loss of genetic diversity often means losing valuable survival traits. Quammen explains:

Genetic drift compounds the founder-effect problem, stripping a small population of the genetic variation that it needs to continue evolving. Without that variation, the population stiffens toward uniformity. It becomes less capable of adaptive response. There may be no manifest disadvantages in uniformity so long as environmental circumstances remain stable; but when circumstances are disrupted, the population won’t be capable of evolutionary adjustment. If the disruption is drastic, the population may go extinct.

This loss of adaptability is one of the two major issues caused by the founder principle, the second being inbreeding depression. A founder population may have no choice but to breed within itself, and a symptom of too much inbreeding is the manifestation of harmful genetic variants among inbred individuals. (This is one reason humans consider incest dangerous.) This too increases the fragility of a species and decreases its ability to evolve.

The founder principle is just one of many amazing ideas in The Song of the Dodo. In fact, we at Farnam Street feel the book is so important that it made our list of books we recommend to improve your general knowledge of the world and it was the first book we picked for our members-only reading group.

If you have already read this book and want more, we suggest Quammen’s The Reluctant Mr. Darwin or his equally thought-provoking Spillover: Animal Infections and the Next Human Pandemic. Another wonderful and readable book on species evolution is The Beak of the Finch, by Jonathan Weiner.

The post The Founder Principle: A Wonderful Idea from Biology appeared first on Farnam Street.

The Island of Knowledge: Science and the Meaning of Life https://myvibez.link/the-island-of-knowledge/ Mon, 03 Oct 2016
“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn’t lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”

***

Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions: existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats in the fabric of knowledge. Gleiser celebrates this persistent struggle to understand our place in the world, traveling through our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn’t true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what’s out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”

However…

The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupery’s fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has expanded our view. Our measurement tools and instruments can see bacteria and radiation, subatomic particles, and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.

[…]

Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.
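
To see what quoting a measurement “together with error bars” means in practice, here is a minimal sketch in Python; the repeated readings are invented, and the recipe is simply the sample mean quoted with its standard error:

import statistics

# Invented repeated readings of the same quantity (arbitrary units).
readings = [9.81, 9.79, 9.83, 9.80, 9.78, 9.82, 9.84, 9.80]

mean = statistics.mean(readings)
# Standard error of the mean: sample standard deviation / sqrt(n).
# More careful measurement shrinks the error bar; nothing removes it.
sem = statistics.stdev(readings) / len(readings) ** 0.5

print(f"measured value: {mean:.3f} +/- {sem:.3f}")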

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even as our “detecting and measuring” improve over time. We thus base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can’t see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.

[…]

If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.

[…]

We thus must ask whether grasping reality’s most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can’t do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such thing as the true or ultimate nature of reality, all we have is what we can know of it.

[…]

Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow’s reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn’t lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really questions of a scientific nature — they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.

Merchants Of Doubt: How The Tobacco Strategy Obscures the Realities of Global Warming https://myvibez.link/merchants-of-doubt/ Tue, 09 Feb 2016

There will always be those who try to challenge a growing scientific consensus — indeed, such challenges are fundamental to science. Motives, however, matter, and not everyone has good intentions.

***

Naomi Oreskes and Erik Conway’s masterful work Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming was recommended by Elon Musk.

The book illuminates how the tobacco industry created doubt and kept the controversy alive well past scientific consensus. Oreskes and Conway call this the Tobacco Strategy. And the same playbook is being run all over again, this time with global warming.

The goal of the Tobacco Strategy is to create doubt about a causal link in order to protect the interests of incumbents.

Millions of pages of documents released during tobacco litigation demonstrate these links. They show the crucial role that scientists played in sowing doubt about the links between smoking and health risks. These documents— which have scarcely been studied except by lawyers and a handful of academics— also show that the same strategy was applied not only to global warming, but to a laundry list of environmental and health concerns, including asbestos, secondhand smoke, acid rain, and the ozone hole.

Interestingly, not only are the tactics the same when it comes to Global Warming, but so are the people.

They used their scientific credentials to present themselves as authorities, and they used their authority to try to discredit any science they didn’t like.

Over the course of more than twenty years, these men did almost no original scientific research on any of the issues on which they weighed in. Once they had been prominent researchers, but by the time they turned to the topics of our story, they were mostly attacking the work and the reputations of others. In fact, on every issue, they were on the wrong side of the scientific consensus. Smoking does kill— both directly and indirectly. Pollution does cause acid rain. Volcanoes are not the cause of the ozone hole. Our seas are rising and our glaciers are melting because of the mounting effects of greenhouse gases in the atmosphere, produced by burning fossil fuels. Yet, for years the press quoted these men as experts, and politicians listened to them, using their claims as justification for inaction.

December 15, 1953, was a fateful day. A few months earlier, researchers at the Sloan-Kettering Institute in New York City had demonstrated that cigarette tar painted on the skin of mice caused fatal cancers. This work had attracted an enormous amount of press attention: the New York Times and Life magazine had both covered it, and Reader’s Digest— the most widely read publication in the world— ran a piece entitled “Cancer by the Carton.” Perhaps the journalists and editors were impressed by the scientific paper’s dramatic concluding sentences: “Such studies, in view of the corollary clinical data relating smoking to various types of cancer, appear urgent. They may not only result in furthering our knowledge of carcinogens, but in promoting some practical aspects of cancer prevention.”

These findings, however, shouldn’t have been a surprise. We’re often blinded by a ‘bad people can do no right’ line of thought.

German scientists had shown in the 1930s that cigarette smoking caused lung cancer, and the Nazi government had run major antismoking campaigns; Adolf Hitler forbade smoking in his presence. However, the German scientific work was tainted by its Nazi associations, and to some extent ignored, if not actually suppressed, after the war; it had taken some time to be rediscovered and independently confirmed. Now, however, American researchers—not Nazis—were calling the matter “urgent,” and the news media were reporting it. “Cancer by the carton” was not a slogan the tobacco industry would embrace.

With the mounting evidence, the tobacco industry was thrown into a panic.

So industry executives made a fateful decision, one that would later become the basis on which a federal judge would find the industry guilty of conspiracy to commit fraud— a massive and ongoing fraud to deceive the American public about the health effects of smoking. The decision was to hire a public relations firm to challenge the scientific evidence that smoking could kill you.

On that December morning (December 15th), the presidents of four of America’s largest tobacco companies— American Tobacco, Benson and Hedges, Philip Morris, and U.S. Tobacco— met at the venerable Plaza Hotel in New York City. The French Renaissance chateau-style building— in which unaccompanied ladies were not permitted in its famous Oak Room bar— was a fitting place for the task at hand: the protection of one of America’s oldest and most powerful industries. The man they had come to meet was equally powerful: John Hill, founder and CEO of one of America’s largest and most effective public relations firms, Hill and Knowlton.

The four company presidents— as well as the CEOs of R. J. Reynolds and Brown and Williamson— had agreed to cooperate on a public relations program to defend their product. They would work together to convince the public that there was “no sound scientific basis for the charges,” and that the recent reports were simply “sensational accusations” made by publicity-seeking scientists hoping to attract more funds for their research. They would not sit idly by while their product was vilified; instead, they would create a Tobacco Industry Committee for Public Information to supply a “positive” and “entirely ‘pro-cigarette’” message to counter the anti-cigarette scientific one. As the U.S. Department of Justice would later put it, they decided “to deceive the American public about the health effects of smoking.”

At first, the companies didn’t think they needed to fund new scientific research, thinking it would be sufficient to “disseminate information on hand.” John Hill disagreed, “emphatically warn[ing] … that they should … sponsor additional research,” and that this would be a long-term project. He also suggested including the word “research” in the title of their new committee, because a pro-cigarette message would need science to back it up. At the end of the day, Hill concluded, “scientific doubts must remain.” It would be his job to ensure it.

Over the next half century, the industry did what Hill and Knowlton advised. They created the “Tobacco Industry Research Committee” to challenge the mounting scientific evidence of the harms of tobacco. They funded alternative research to cast doubt on the tobacco-cancer link. They conducted polls to gauge public opinion and used the results to guide campaigns to sway it. They distributed pamphlets and booklets to doctors, the media, policy makers, and the general public insisting there was no cause for alarm.

The industry’s position was that there was “no proof” that tobacco was bad, and they fostered that position by manufacturing a “debate,” convincing the mass media that responsible journalists had an obligation to present “both sides” of it.

Of course there was more to it than that.

The industry did not leave it to journalists to seek out “all the facts.” They made sure they got them. The so-called balance campaign involved aggressive dissemination and promotion to editors and publishers of “information” that supported the industry’s position. But if the science was firm, how could they do that? Was the science firm?

The answer is yes, but. A scientific discovery is not an event; it’s a process, and often it takes time for the full picture to come into clear focus.  By the late 1950s, mounting experimental and epidemiological data linked tobacco with cancer— which is why the industry took action to oppose it. In private, executives acknowledged this evidence. In hindsight it is fair to say— and science historians have said— that the link was already established beyond a reasonable doubt. Certainly no one could honestly say that science showed that smoking was safe.

But science involves many details, many of which remained unclear, such as why some smokers get lung cancer and others do not (a question that remains incompletely answered today). So some scientists remained skeptical.

[…]

The industry made its case in part by cherry-picking data and focusing on unexplained or anomalous details. No one in 1954 would have claimed that everything that needed to be known about smoking and cancer was known, and the industry exploited this normal scientific honesty to spin unreasonable doubt.

[…]

The industry had realized that you could create the impression of controversy simply by asking questions, even if you actually knew the answers and they didn’t help your case. And so the industry began to transmogrify emerging scientific consensus into raging scientific “debate.”

Merchants of Doubt is a fascinating look at how the process for sowing doubt in the minds of people remains the same today as it was in the 1950s. After all, if it ain’t broke, don’t fix it.

Karl Popper: The Line Between Science and Pseudoscience https://myvibez.link/karl-popper-on-science-pseudoscience/ Thu, 28 Jan 2016

It’s not immediately clear to the layman what the essential difference is between science and something masquerading as science: pseudoscience. The distinction between the two is at the core of what comprises human knowledge: How do we actually know something to be true?

Sir Karl Popper (1902-1994), the scientific philosopher, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?

He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge:

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?’ or ‘Is there a criterion for the scientific character or status of a theory?’

Popper saw a problem with a number of theories he considered non-scientific: theories that, on their surface, seemed to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method and which are not was harder than it seemed.

What’s Scientific?

It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin said, after working long and hard on the problem (see the work of thinking):

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind — we can’t understand the world from a totally blank slate.)

The problem, as Popper saw it, is that some bodies of knowledge, more properly named pseudoscience, would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they’d say.

The astrologist would tell you, for example, about how “Leos” seek to be the centre of attention: ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: world leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Popper lived at a time when psychoanalytic theories were all the rage, just as Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein’s Relativity? Did all three not have vast explanatory power in the world? Each theory’s proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?’

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed’ and crying aloud for treatment.

The Importance of Falsifiability

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn’t been analyzed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can easily be made: “If X happens, it would show demonstrably that theory Y is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out whether X actually does happen. It’s the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, you thereby strengthen it.

Pseudosciences cannot and do not do this—they are not strong enough to hold up. As an example, Popper discussed Freud’s theories of the mind in relation to Alfred Adler’s so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein’s predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.” This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.
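
That decision rule is simple enough to sketch in a few lines of Python. The numbers below are approximate stand-ins for Einstein’s predicted deflection of starlight and an Eddington-style 1919 eclipse measurement; what matters is that the test has a branch under which the theory loses:

# Approximate, illustrative numbers: general relativity's predicted
# deflection of starlight at the solar limb, and a 1919 eclipse-style
# measurement quoted with its error bar.
predicted = 1.75                  # arcseconds
measured, sigma = 1.61, 0.30      # observed value and standard error

# The theory survives only if the prediction falls within the
# measurement's error bars (two standard errors here). Crucially,
# the refuting branch is reachable: that is what makes the
# prediction risky, and the theory scientific.
if abs(predicted - measured) <= 2 * sigma:
    print("prediction compatible with observation: theory corroborated")
else:
    print("predicted effect absent: theory refuted")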

Popper’s Essential Conclusions

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

Finally, Popper was careful to say that it is impossible to prove that Freudianism was not true, at least in part. But we can say that we simply don’t know whether it’s true, because it does not make specific, testable predictions. It may have many kernels of truth in it, but we can’t tell. This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.

What Can Chain Letters Teach us about Natural Selection? https://myvibez.link/natural-selection-of-chain-letters/ Wed, 20 Jan 2016

“It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.”

***

In 1859, Charles Darwin first described his theory of evolution through natural selection in The Origin of Species. Here we are, 157 years later, and although it has become an established fact in the field of biology, its beauty is still not that well understood among the general public. I think that’s because it’s slightly counter-intuitive. Yet unlike string theory or quantum mechanics, the theory of evolution through natural selection can be grasped by almost anyone.

So, is there a way we can help ourselves understand the theory in an intuitive way, so we can better go on applying it to other domains? I think so, and it comes from an interesting little volume released in 1995 by the biologist Richard Dawkins called River Out of Eden. But first, let’s briefly head back to the Origin of Species, so we’re clear on what we’re trying to understand.

***

In the fourth chapter of the book, entitled “Natural Selection,” Darwin describes a somewhat cold and mechanistic process for the development of species: If species had heritable traits and variation within their population, they would survive in different numbers, and those most adapted to survival would thrive and pass on those traits to successive generations. Eventually, new species would arise, slowly, as enough variation and differential reproduction acted on the population to create a de facto branch in the family tree.

Here’s the original description.

Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection.

[…]

In such case, every slight modification, which in the course of ages chanced to arise, and which in any way favored the individuals of any species, by better adapting them to their altered conditions, would tend to be preserved; and natural selection would thus have free scope for the work of improvement.

[…]

It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.

The beauty of the theory is in its simplicity. The mechanism of evolution is, at root, a simple one. An unguided one. Better descendants outperform lesser ones in a competitive world and are more successful at replicating. Traits that improve the survival of their holder in its current environment tend to be preserved and amplified over time. This is hard to see in real-time, although some examples are helpful in understanding the concept, e.g. antibiotic resistance.

Darwin’s idea didn’t take as quickly as we might like to think. In The Reluctant Mr. Darwin, David Quammen talks about the period after the release of the groundbreaking work, in which the world had trouble coming to grips with Darwin’s theory. It was not the case, as it might seem today, that the world simply threw up its hands and accepted Darwin as a genius. This is a lesson in and of itself. It was quite the contrary:

By the 1890s, natural selection as Darwin had defined it–that is, differential reproductive success resulting from small, undirected variations and serving as the chief mechanism of adaption and divergence–was considered by many evolutionary biologists to have been a wrong guess.

It wasn’t until Gregor Mendel’s peas showed how heritability worked that Darwin’s ideas were truly vindicated against his rivals’. So if we have trouble coming to terms with evolution by natural selection in the modern age, we’re not alone: So did Darwin’s peers.

***

What’s this all got to do with chain letters? Well, in Dawkins’ River Out of Eden, he provides an analogy for the process of evolution through natural selection that is quite intuitive, and helpful in understanding the simple power of the idea. How would a certain type of chain letter come to dominate the population of all chain letters? It would work the same way.

A simple example is the so-called chain letter. You receive in the mail a postcard on which is written: “Make six copies of this card and send them to six friends within a week. If you do not do this, a spell will be cast upon you and you will die in horrible agony within a month.” If you are sensible you will throw it away. But a good percentage of people are not sensible; they are vaguely intrigued, or intimidated by the threat, and send six copies of it to other people. Of these six, perhaps two will be persuaded to send it on to six other people. If, on average, 1/3 of the people who receive the card obey the instructions written on it, the number of cards in circulation will double every week. In theory, this means that the number of cards in circulation after one year will be 2 to the power of 52, or about four thousand trillion. Enough post cards to smother every man, woman, and child in the world.

Exponential growth, if not checked by the lack of resources, always leads to startlingly large-scale results in a surprisingly short time. In practice, resources are limited and other factors, too, serve to limit exponential growth. In our hypothetical example, individuals will probably start to balk when the same chain letter comes around to them for the second time. In the competition for resources, variants of the same replicator may arise that happen to be more efficient at getting themselves duplicated. These more efficient replicators will tend to displace their less efficient rivals. It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.

In the case of the chain letter, being efficient may consist in accumulating a better collection of words on the paper. Instead of the somewhat implausible statement that “if you don’t obey the words on the card you will die in horrible agony within a month,” the message might change to “Please, I beg of you, to save your soul and mine, don’t take the risk: if you have the slightest doubt, obey the instructions and send the letter to six more people.”

Such “mutations” happen again and again, and the result will eventually be a heterogenous population of messages all in circulation, all descended from the same original ancestor but differing in detailed wording and in the strength and nature of the blandishments they employ. The variants that are more successful will increase in frequency at the expense of less successful rivals. Success is simply synonymous with frequency in circulation. 
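
Dawkins’s scenario is simple enough to run as a toy simulation. The Python sketch below is illustrative only: the variant names and obedience probabilities are invented, and a crude cap on total cards stands in for limited resources. No variant “tries” to do anything, yet the more persuasive wording comes to dominate circulation:

import random

def circulate(variants, weeks=30, cap=50_000):
    """Toy chain-letter ecology. Each variant is (name, p_obey): the
    invented chance a recipient obeys and mails six copies. Next week's
    cards are just the copies sent this week; the cap on total cards
    stands in for finite resources, scaling all variants down
    proportionally so only relative copying success matters."""
    counts = {name: 100 for name, _ in variants}
    for _ in range(weeks):
        for name, p_obey in variants:
            obeyers = sum(random.random() < p_obey for _ in range(counts[name]))
            counts[name] = obeyers * 6
        total = sum(counts.values())
        if total > cap:                        # resource limit reached
            counts = {k: round(v * cap / total) for k, v in counts.items()}
    total = sum(counts.values()) or 1
    return {name: count / total for name, count in counts.items()}

random.seed(7)
shares = circulate([("threatening", 1 / 3), ("pleading", 0.40)])
for name, share in shares.items():
    print(f"{name:>11} variant: {share:.1%} of cards in circulation")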

The chain letter contains all of the elements of biological natural selection except one: Someone had to write the first chain letter. The first replicating biological entity, on the other hand, seems to have sprung up from an early chemical brew.

Consider this analogy an intermediate mental “step” towards the final goal. Because we know and appreciate the power of reasoning by analogy and metaphor, we can deduce that finding an appropriate analogy is one of the best ways to pound an idea into your head–assuming it is a correct idea that should be pounded in.

And because evolution through natural selection is one of the more powerful ideas a human being has ever had, it seems worth our time to pound this one in for good and start applying it elsewhere if possible. (For example, in his talk, A Lesson on Worldly Wisdom, Munger uncovers how business evolves in a manner such that competitive results are frequently similar to biological outcomes.)

Read Dawkins’ book in full for a deeper look at his views on replication and natural selection. It’s shorter than some of his other works but worth the time.
