
Technology has been a major driver of every large-scale revolutionary change. As we leave the Information Age and head into the next era, it is prudent to examine the extraordinary technologies emerging from the scientific laboratory, and to attempt to understand the social and ethical impact these technologies will have on healthcare, society, and our species.

3.1 Introduction

In 1980, Alvin Toffler [1] described the three ages of man: the Agriculture Age, the Industrial Age, and the "new" Information Age. The purpose was to call to public attention and scrutiny what was perceived to be a new, major change in our society as a whole. This change, the Information Age, was described as a new revolution predicated upon a new technology that would totally change the fabric of daily living. The premise was that the entire societies of the previous ages, the Agriculture Age and the Industrial Age, were based on a specific technology: farming, and then machines. The Information Age is based on telecommunications and computers, that is, the dispersal of information. Agricultural societies were intent on providing survival for themselves, their families, or their immediate communities. With the Industrial Age, a few people were able to provide the food and material needs of thousands or millions of people by way of efficient machine technology, be it tractors and harvesters for food or the mass production of clothing, transportation, and other devices. In the Information Age, rather than working directly to produce goods, there was a major switch to a service industry in which people did not grow, build, or make anything, but rather acted as intermediaries who provided other people's (or manufacturers') products or performed services for them.

The Information Age is actually over 100 years old, encompassing technologies such as radio, television, the telephone, computers, and the Internet, although Toffler chose to date it as beginning after World War II. During this Information Age, the majority of people did not farm their own food, make products, or even perform manual labor for someone else, but rather dealt in intangible information. The focus is on making information available in a timely (usually rapid) manner in order to make better decisions, to acquire a market position, to archive massive amounts of information for reference, to connect people with other people or with information in a ubiquitous manner, and so on. There is nothing substantive: no ear of corn, no new clay pot, not even a hole in the ground. Instead, there are ideas, datasets, and networks. A chat room, for example, is not a room but a computer program for sharing messages and conversation, in essence a virtual room. The result has been to make a larger amount of information available throughout the world, so that fewer people could make even vaster amounts of food and products available, freeing the majority of the population either to concentrate on producing more information (away from manual labor) or to enjoy more leisure time. In addition, many inanimate objects, such as telephones, automobiles, computers, and even robots, began to acquire the earliest, primitive level of "intelligence". These machines and devices could do very simple tasks that people used to do: cell phones store phone numbers, automobiles have automatically adjusting brakes, and televisions and videocassette recorders are programmable and have remote controls. Throughout these three ages or revolutions, humans themselves have remained unchanged: what humans do has changed but, with the exception of a significant reduction in disease (and a resulting slight increase in lifespan), humans are physically the same as they have been for hundreds of thousands of years.

More important for understanding the various ages and their transitions, there comes a time when a revolution (such as the Industrial Age) passes from revolution to evolution. Figure 3.1 is a conceptual graphic of the ages. What is noticeable is that there is a "tail" at the beginning of the revolution, which shows a small amount of change in the new technology: discovery. A point is then reached when there is very rapid growth of the new technology: commercialization. The "revolution" is now taken up by society as a whole, for example Henry Ford making the automobile available to everyone: consumer acceptance. Any subsequent changes in the technology are evolutionary rather than revolutionary. Once a revolution has achieved consumer acceptance, the changes that follow are iterative, making the product better but not inventing a new product. The rapid growth in technology flattens out, and no significant new technology is invented. It appears that this plateau has been reached with Information Age technologies, as manifested by the ubiquitous use of cell phones, computers, the Internet, etc. There has not been a fundamentally new invention in information technology in over a decade; researchers are simply making the things we already have better or cleverer. If no "new and revolutionary technologies" are being created by Information Age technologies, then in what direction, and with what technology, will the next revolution occur?

* The opinions or assertions contained herein are the private views of the authors and are not to be construed as official, or as reflecting the views of the Department of the Army, Department of the Navy, the Advanced Research Projects Agency, or the Department of Defense.

Fig. 3.1 The ages of the development of technology

There appears to be another new age emerging. Because we are in the very middle of the change, it is hard to perceive the trends and interpret the essence of what is happening around us, and it is not possible to prove that a change is occurring, so the following speculation is offered. In Fig. 3.1, there is a new tail represented that has not yet reached the "Consumer Acceptance Line". This trend is rooted in the discoveries in biology over the past 30 to 50 years: not only the discovery of DNA and the human genome, but also the many pharmaceuticals and consumer products based on biological principles. In addition, the primordial efforts during the Information Age to make devices intelligent are now expanding exponentially. Information Age bar codes have made all products identifiable at all times and linked them to many important functions for stores, and so on. Credit cards provide access at all times to all things that can be purchased, either directly or via the Internet. New microtechnologies such as radio frequency identification (RFID) tags are complete computers so tiny (smaller than the head of a pin) that virtually everything, from food to clothes to appliances, will have a tiny bit of intelligence embedded inside and will be able to communicate with other objects. The result is a world in which even inanimate objects are "smarter" and "talk" with one another. Perhaps this could be considered the first step toward a new life form, one capable of communicating by itself but not "living" in the same sense that people do. Most importantly, this revolution is being led not by individual brilliant researchers discovering something in their tiny niche, but by large, interdisciplinary teams with expertise in many areas and a heavy emphasis on the biologic sciences. The discovery and understanding of the complexity of the world has progressed to the point where no single person can understand the truly large issues, and any fundamentally revolutionary change can only be achieved by interdisciplinary teams. The term "BioIntelligence Age" [2] has been proposed as a placeholder name for this new direction, because it captures the combined importance of discoveries in the biological, physical, and information sciences (Fig. 3.2). Discoveries are occurring at the interfaces of two or more of these technologies, creating something that a single discipline could not develop alone.

On this broad background, it is appropriate to investigate how one portion of this change in science and technology, healthcare, is accommodating to the future. Although many of the technologies that will affect the future are being discovered in the basic sciences, their ultimate use will be for healthcare purposes, or will require implementation by a healthcare provider. The technologies addressed below have been chosen because of the profound questions they raise for individuals, society, and the species as a whole. While many have been considered to be in the realm of science fiction, recent discoveries have been subject to the rapid acceleration of technology and therefore will appear much earlier than anticipated: science fiction will soon become scientific fact! These new discoveries will launch the moral and ethical challenges that today's students and residents must solve during their careers.

Fig. 3.2 The BioIntelligence Age, and ages of interdisciplinary research

3.2 Intelligent Computers and Robots

The human brain has been estimated to compute at a speed of 4 × 10^19 computations per second (cps) [3]. The latest supercomputer, Red Storm at Sandia National Laboratories (Albuquerque, N.M.), computes at 4 × 10^15 cps, still roughly 10,000 times slower than the human brain. However, Moore's Law (roughly interpreted as "computer power doubles every 18 months") would indicate that computers will be as fast as (or faster than) humans in 15-20 years. New programming techniques, such as genetic algorithms, cellular automata, and neural networks, are designed to "learn". The result will be computers, machines, and robots with greater computing power than humans, which will have the ability to learn from experience, to adapt to new or novel situations, and to design solutions to situations they have never encountered. Will they be intelligent? Will humans "communicate" with them? If they are intelligent, are they "alive", and must they be given "rights"? Will they even remember that we created them, or even care?
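
The arithmetic behind this extrapolation can be made explicit with a short calculation. The Python sketch below is only a back-of-the-envelope illustration using the figures quoted above (4 × 10^19 cps for the brain, 4 × 10^15 cps for the supercomputer, and a doubling every 18 months); the variable names are invented for this example.

```python
import math

# Back-of-the-envelope Moore's Law extrapolation (illustrative only).
# Figures are those quoted in the text, not independent measurements.
brain_cps = 4e19          # estimated human brain computations per second
computer_cps = 4e15       # quoted supercomputer speed
doubling_years = 1.5      # "computer power doubles every 18 months"

gap = brain_cps / computer_cps                 # ~10,000-fold shortfall
doublings_needed = math.log2(gap)              # ~13.3 doublings
years_to_parity = doublings_needed * doubling_years

print(f"Gap: {gap:,.0f}x")
print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Estimated years to parity: {years_to_parity:.0f}")   # ~20 years
```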

3.3 Human Cloning

There exist numerous human clones in many different countries, with publications about them coming from China, Korea, and Italy [4]. The United States and most of the world community have banned human cloning. Was that a prudent move or just a knee-jerk reaction? With an ever-escalating world population and millions of starving people, why is it necessary to clone a human? Although there has been no formal consensus on how to address the issue of human cloning, it is banned in most countries. Was that the correct decision, or should a family that has tried all known forms of medically assisted reproduction and failed be given the chance to have their own child through cloning? Is cloning one more step in the "natural" evolution of humans?

3.4 Genetic Engineering

The first genetically engineered child was born in 2003 to a family with three boys. The parents decided to "engineer" their fourth child to be a girl; this and many other examples are discussed by Gregory Stock in his book Redesigning Humans: Our Inevitable Genetic Future [5]. Not only is it possible to select specific favorable human traits through engineering, but the genetic sequences for a number of diseases have also been characterized, and there are children who have had a disease trait engineered out in order to live a normal, happy life. Other parents have chosen to use genetic engineering for a second child (the "savior sibling") when the firstborn child develops an incurable disease such as leukemia [6]. The newborn child's normal hematopoietic stem cells can provide rejection-free replenishment for the firstborn, whose bone marrow has been ablated by total irradiation to treat the leukemia. Is it moral to specifically engineer and conceive one child in order to save another?

Another aspect of genetic engineering is that the genetic sequences for specific traits in one species (e.g., the genes that allow some reptiles and hummingbirds to see in the dark with infrared or ultraviolet vision [7, 8]) are well characterized and have been successfully transplanted across species. Should humans engineer their children not only with traits that make them better or stronger humans, but also with traits that go beyond known human capabilities, such as infrared vision, especially if the new trait provides an important new advantage? How will it be decided who can receive genetic traits that give a person a superior advantage?

3.5 Longevity

The longest well-documented human lifespan is 122 years. One of the major determinants of longevity is the telomere on a chromosome: each time a cell divides, the telomere shortens, eventually becoming too short to sustain further division, and the cell dies. The enzyme telomerase rebuilds telomeres and maintains their length; a few strains of mice (and "immortal" cell lines) have been engineered to express telomerase, and these mice are able to live two to three times a normal life span [9]. If this mechanism is also effective in humans, should we conduct human trials to determine whether a person can live 200 years, or longer? If extending longevity is successful, what are the social implications of living 200 years? Does the person retire at 60 years of age, with 140 years of retirement? How would it be possible for the planet to support the massive increase in population if people lived so long?
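
The telomere mechanism sketched above can be illustrated with a toy simulation. In the Python sketch below, every number (starting length, loss per division, critical length, telomerase rebuild rate) is an arbitrary assumption chosen for illustration, not a measured biological value; the point is only to show how offsetting telomere erosion removes the division limit.

```python
# Toy model of telomere erosion (illustrative only; all numbers are
# arbitrary assumptions, not measured biological values).
TELOMERE_START_BP = 10_000   # starting telomere length in base pairs
LOSS_PER_DIVISION_BP = 70    # base pairs lost at each division
CRITICAL_LENGTH_BP = 4_000   # below this, the cell stops dividing
TELOMERASE_REBUILD_BP = 70   # base pairs restored when telomerase is active

def divisions_until_senescence(telomerase_active: bool, max_divisions: int = 10_000) -> int:
    """Count divisions before the telomere becomes critically short."""
    length = TELOMERE_START_BP
    for division in range(1, max_divisions + 1):
        length -= LOSS_PER_DIVISION_BP
        if telomerase_active:
            length += TELOMERASE_REBUILD_BP  # telomerase offsets the loss
        if length <= CRITICAL_LENGTH_BP:
            return division
    return max_divisions  # effectively "immortal" within this simulation

print("Without telomerase:", divisions_until_senescence(False), "divisions")
print("With telomerase:   ", divisions_until_senescence(True), "divisions")
```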

3.6 Human-Machine Communication

A number of centers around the world have implanted probes into monkeys' brains and recorded the signals produced when a monkey moves its arm to feed itself [10]. By training the monkey to eat and then decoding these signals, it has been possible to send the signals for eating directly to a robotic arm. In a short time, the monkey is able to feed itself with the robotic arm simply by thinking about feeding itself. Where can this technology lead? To placing probes in the brain that connect directly to a computer or the Internet? As the control of artificial limbs and other parts of the body becomes more successful, should such prostheses be used to replace or restore the limbs of paraplegics or quadriplegics? Will such persons be true cyborgs (half human, half machine)?
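
To make the decoding step concrete, here is a minimal sketch of a linear decoder that maps neural firing rates to arm-velocity commands. The data are synthetic and the least-squares linear model is an assumption for illustration; real brain-machine-interface decoders are considerably more sophisticated.

```python
import numpy as np

# Minimal sketch of a linear brain-machine-interface decoder.
# Synthetic data only: firing rates and arm velocities are simulated,
# and a least-squares linear map stands in for real decoding models.
rng = np.random.default_rng(0)

n_samples, n_neurons = 500, 40
true_weights = rng.normal(size=(n_neurons, 2))          # hidden mapping to (vx, vy)
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
arm_velocity = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# "Training": estimate decoder weights by least squares from recorded trials.
decoder, *_ = np.linalg.lstsq(firing_rates, arm_velocity, rcond=None)

# "Use": decode a new burst of neural activity into a robotic-arm command.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
vx, vy = (new_rates @ decoder)[0]
print(f"Decoded arm velocity command: vx={vx:.2f}, vy={vy:.2f}")
```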

3.7 Artificial Organs and Prostheses

The following example typifies the interdisciplinary approach needed to design and create complex living systems, such as growing artificial organs to replace diseased ones. The illustration approximates the system pioneered by Dr. Joseph Vacanti [11] of Massachusetts General Hospital (MGH) and the Massachusetts Institute of Technology (MIT), and is described in order to convey the critical need for an interdisciplinary approach in research and healthcare. Using computational mathematics, a complete microvascular system, with an artery and vein that anastomose at a 10-µm size (red blood cells are 8 µm in diameter), is designed on a computer. This design is exported to a stereolithography machine (a 3-D printer) that "prints" the blood vessel system using a bioresorbable polymer designed by chemists, in which angiogenesis factor, platelet-derived growth factor, and other cell growth promoters supplied by molecular biologists are embedded. This artificial scaffold is then suspended in a bioreactor (a bath of fluid that supports cell growth) to which vascular endothelial stem cells are added. The stem cells grow and absorb the scaffold, leaving a living microvascular system; this is placed into another bioreactor with hepatic stem cells, and a miniature liver is grown while the blood vessels are perfused. The result is a tiny portion of a synthetically grown liver that is able to support growth and produce the products (albumin, globulin, etc.) a natural liver would produce. The challenge for the future is to test whether this tissue will survive when implanted in an animal, and whether the process can be scaled up to a full human-size liver grown from a person's own stem cells. A number of alternative approaches are also being investigated: a printed matrix of substrate that attracts and promotes natural stem cell growth; a stereolithography printer that can print several different cell types simultaneously in order to print an entire organ; and transgenic pigs that can grow an organ that is not rejected by a human, among other innovations. With such a large amount of research proceeding from so many different directions, it is highly likely that in the near future synthetically grown organs will be available on demand.
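
The "computational mathematics" step can be pictured with a very small geometric sketch. The Python example below generates an idealized symmetric branching tree whose vessel diameters follow Murray's law, stopping at the 10-µm scale mentioned above; the starting diameter and the assumption of symmetric branching are invented for illustration, and this is not the actual MGH/MIT design software.

```python
# Toy generator for a symmetric branching vascular tree obeying Murray's law
# (d_parent^3 = d_child1^3 + d_child2^3). Illustrative only: the starting
# diameter and symmetric branching are assumptions, not the published design.
START_DIAMETER_UM = 1_000.0            # hypothetical feeding vessel (1 mm)
TARGET_DIAMETER_UM = 10.0              # capillary-scale target from the text
CHILD_RATIO = 0.5 ** (1 / 3)           # ~0.794 for two equal daughter vessels

def generations_to_capillary(start_um: float, target_um: float) -> int:
    """Count branch generations until vessel diameter reaches the target scale."""
    diameter, generation = start_um, 0
    while diameter > target_um:
        diameter *= CHILD_RATIO
        generation += 1
    return generation

gens = generations_to_capillary(START_DIAMETER_UM, TARGET_DIAMETER_UM)
print(f"Branch generations needed: {gens}")      # ~20 for these assumptions
print(f"Vessels at that level: {2 ** gens:,}")   # ~1 million terminal branches
```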

Another successful technique for designing replacement parts for humans is the intelligent prosthesis. While current orthopedic prostheses, such as hips and other joints, have been successful for decades, re-replacement is often needed because of wear from mechanical stress and strain, fracturing, and so on. New research incorporates microsensors and actuators into prostheses, which can then respond to stresses and adjust the prosthesis to take the strain off the bone, providing a more stable and longer-lived implant; other work includes implantable micropumps that can sense blood sugar levels and release insulin to control diabetes [12]. As development continues in synthetic (living) or prosthetic replacement parts for humans, it may become possible to replace most of the human body with synthetics (a cyborg). Will there be a threshold reached when a person is more than 90% synthetic replacements, and if so, will that person still be "human"? What exactly is it that makes a person human?
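
As a schematic of the sense-and-respond loop behind such an implantable micropump, the sketch below runs a bare-bones proportional controller that releases insulin only when a simulated glucose reading exceeds a target; the threshold, gain, and glucose dynamics are invented for illustration and bear no relation to any clinical dosing algorithm.

```python
# Bare-bones closed-loop controller for a hypothetical implantable insulin
# micropump. All numbers (target, gain, glucose dynamics) are invented for
# illustration; this is not a clinical dosing algorithm.
TARGET_GLUCOSE_MG_DL = 110.0
PROPORTIONAL_GAIN = 0.02        # units of insulin per mg/dL above target

def insulin_dose(glucose_mg_dl: float) -> float:
    """Proportional response: dose only when glucose is above target."""
    error = glucose_mg_dl - TARGET_GLUCOSE_MG_DL
    return max(0.0, PROPORTIONAL_GAIN * error)

# Simulated sensor readings over a few sampling intervals.
glucose = 180.0
for step in range(6):
    dose = insulin_dose(glucose)
    print(f"t={step}: glucose={glucose:5.1f} mg/dL -> dose={dose:.2f} U")
    glucose -= 25.0 * dose + 2.0    # crude model: insulin and baseline uptake lower glucose
```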

Fig. 3.3 The rate of change in different sectors in response to disruptive technology