Aug 16

Oceans as a memory of evolution

I have encountered a fascinating article! It tracks the evolution of viruses, and how they make other species sick, by observing the flora and chemical composition of the oceans. Here are the parts that influenced me most:

“The chemistry of the ocean also carries memory of the atmosphere—of the carbon dioxide (and other gases) its waters have exchanged with the air, which can be stored at depth in the solid form of calcium carbonate for tens to hundreds of years, ready to be redissolved with the right priming or triggering event (anthropogenic inputs), in the process helping to regulate whether the ocean remains at a neutral pH or becomes acidic (which can harm many forms of life).”

“But in many cases, the viral relationship to the host is simply unknown. Some 44 percent of human genes are transposable elements—“jumping genes” that can change position in a genome. A remarkable one fifth of those genes, or 8 percent of the human genome, are derived from retroviruses. By far, most of these grafted-on pieces of us are of unknown function for us.”

“All organisms employ a suite of tools to communicate at a molecular level, with chemistry (as in quorum-sensing) serving to accomplish this metabolic “diplomacy” across habitat boundaries. Usually an endless stream of communications provides a general sense of equilibrium, but imbalances or irregularities in the network can emerge, demanding “correction.”

“As the novel coronavirus becomes more familiar to the human body, it offers a memory of the deep connection that human evolution—all life—has to early ocean history. Might a shared focus on the surfacing of entrenched memories of human evolution, systemic racism and the trauma in individual lives lost galvanize humanity towards collective change that is mindful of our history and ecology?”

 

See the original source: The Ocean Carries ‘Memories’ of SARS-CoV-2

 

Jun 20

Against ‘Mainstream’ Feminism

I’ve encountered an illuminating video that is worth thinking about for a while. Angela Davis positions herself against mainstream, or bourgeois, feminism. She argues that mainstream feminism has become part of the patriarchal hierarchy instead of reaching the lower levels of that hierarchy. For example, the glass-ceiling effect mostly concerns highly educated and, generally, white women who aspire to the highest positions in society, whereas women of color, indigenous women, and trans women still suffer from inequality. See the video to get inspired:

May 06

Future of Conservation: Instrumental Value of Artifactualness

Abstract

Given the concerns of current climate change, I feel the urge to reevaluate our conservation attempts so that we act fast and reasonably while leaving no one behind. The definitions and attributed roles of nature and naturalness have been formed in a way that makes saving modified organisms seem futile. I propose a conservation assessment of species based on their instrumental value, without excluding genetically, physiologically, or naturally modified organisms. I redefine artifactualness as being shaped by humans directly or indirectly, where indirect effects are attributed to the recent human activities accelerating global climate change. As a result, in my view almost all species now share one more thing in common: being artifactual. Conservation strategies can be modified by evaluating species for their functionality in an ecosystem via their instrumental values, regardless of whether they are natural.

Prologue

In a world where we have disrupted CO2 levels and triggered human-made modifications of species’ internal balance, we, ‘Homo sapiens’, bear a great responsibility towards both the environment we live in and the species surrounding us. At the same time, we find ourselves in a very strange position: both destroyer and saver. We are now trying to save the future lives we damaged before. Our species positions itself somewhat above all other species in nature on account of presumed cognitive abilities, yet this very species, ‘us’, caused the deterioration of the natural balance, from modifying the N2 and O2 cycles to exploiting the population sizes of other species. Even within this short paragraph, I have called us ‘Homo sapiens’, ‘humans’, and ‘us’. The challenge here is not only syntactic but semantic. Are we a gang, ‘us’? Are we abstracting ourselves, ‘humans’? Or are we just another species, ‘Homo sapiens’? How we place ourselves might become key, especially in environmental issues, in the sense that it can give us some courage to take responsibility for what we have done so far, and hope to do better next time. That is why I will call us ‘we’ throughout this paper, because we are in all of this together.

 

Introduction

In the era of climate change, we have jumped to the stage of acceptance after long years of denial. We all know that these two words have been used by politicians, policy makers, and scientists in many different contexts. Most of us have missed the significance of life-saving actions as simple as decreasing the use of fossil fuels. It is urgent that we act now to reduce CO2 levels in the air and oceans. Clean air has been declared a basic human right, although sufficient action has never been taken since the 1970s. Why? Because we could not force our governments to act accordingly, even though rules and regulations were written precisely. The United Nations High Commissioner for Human Rights said at a conference that “there can be no doubt that all human beings are entitled to breathe clean air”.1 However, I see a problem here. While stating the importance of a clean environment for humans, this framing does not sufficiently emphasize being mindful of nature, to live and let other species live. Since we have the power to cause mass extinctions, modify existing species, and create novel species, we need to value species carefully.2 What sorts of value species have, and how we act to conserve those species, are two branches of ethical concern. I believe both need to be redefined in light of anthropogenic climate change and its effects on the future of the entire world.

 

Species sharing the most affected areas can be prioritized over others, but drawing a line between those deemed more important than others is another issue. Some claim that the prioritization of places can be achieved on the basis of their biodiversity value, although others dismiss the concept of biodiversity as nonsense.3-5 At least all agree to disagree: we need a comparative measure to decide on conservation, while the definition of the most vulnerable shifts from time to time, just as the definition of species shifts due to gray regions in species boundaries. Species richness and abundance have been offered as measurement criteria; however, many have shown that mere counting does not yield accurate protection strategies.5 Another attempt in species conservation practices takes a qualitative approach, considering the value of species. There is no doubt that species are valuable, even if the types of value they hold differ. Nevertheless, these values make them worth protecting. Even before arranging policies for conservation, it is imperative that we come to a clear understanding of values.

 

The value of species can shift from context to context, such that being potentially useful in the future and having a long history on earth might become two conflicting criteria for prioritization. There is almost no single measure for comparing all species. Besides, as decision makers, we reason very subjectively most of the time: we can decide to preserve a colorful flower while letting a critical ant colony die. To solve the problem of weighing species, two categories of value are generally discussed: instrumental value and final value. Instrumental value implies that some species are means to regulate an ecosystem, so we need to save them for their present or future usefulness. Instrumental value is also somewhat local to a specific area. Oxygen levels are supported by trees, for example, but their leaves, height, types of roots, and so on vary considerably across climates. Global climate change creates contradictions for local attempts, such that we cannot recommend one specific species to sustain oxygen levels everywhere. However, accepting that we cannot produce a preservation law valid everywhere would be a first step in conservation biology. Furthermore, I believe conservation biology should rely only on instrumental value, for three reasons. First, the zeitgeist, the spirit of the time of climate change, dictates it. Second, final value and its subcategories (subjective and objective) do not offer a direct path to conservation. Third, the instrumental value of species can buy some time for the future of the earth by saving artifactual species and redefining artifactualness. I will first address the former two briefly and then elaborate on the last one more deeply.

Conservation biology should only rely on instrumental value

Zeitgeist the dictator. I will not spend time and space discussing whether climate change is real. It is very real and happening right now. Atmospheric and surface temperatures, ocean acidity, and sea levels are rising abnormally. This means there will be endangered species living in the oceans, underground, and in the air, because their natural environment is changing so rapidly that they might fail to adapt to it. These global changes stimulate many irregularities in a domino effect. For example, sea turtles’ sex ratios have been affected by abnormal temperature changes in coastal sands. They bury their eggs under the sand, and the sex of the juveniles depends on the surrounding temperature. Researchers observed that turtles ended up with a female-dominant population due to rising temperatures. The story does not end there. Some concerned conservation biologists decided to regulate the sex ratio by putting turtle eggs in an incubator at a predefined temperature. Interestingly, they accidentally set the incubator to a lower temperature than usual and got a male-dominant population, in which fertility and the size of the next generations decreased significantly. The traumatic part is that they noticed this mistake 20 years later!6 There is a circular path in which humans appear at multiple stations. We caused global climate change. We realized it and wanted to fix it. We broke some parts while trying to fix it. Again, we caused the change.

 

The dramatic example outlined above shows that our solution strategies can change the game in either direction. Conservation attempts are in a hurry, having found more evidence and more supporters than before once the effects of climate change became apparent. To be honest, we have to be fast to protect as much as we can.

We can think of ourselves as boarding Noah’s Ark, except that the ark is the earth itself instead of a ship. There will be limited space on deck, which is analogous to having limited world resources. Selection and preservation criteria should be set before boarding, and I propose using the instrumental value of species in order to act fast and fair.

 

Being an instrument might sound like the objectification of creatures, to be utilized as tools. We need to be clear that we are not the users of species; nature is. Kenneth Goodpaster discussed this aspect of values developed in a cultural context. Our egoistic intentions might drive us to modify the environment as we wish, so that the imposed values bear no relationship to the naturally functioning world.7 On the other hand, the determination criterion described here, the instrumental value of species, does not make us heroes so altruistic that they sacrifice their lives for others. Once we set the wheels of conservation mechanisms turning on the basis of instrumental value, supposedly no species, humans included, can intervene in the process. The concept of mechanisms may seem empty until it is defined in a proper context. What I want to emphasize is the importance of having a recipe or a roadmap for the preservation of species. The mechanisms of conservation could be attempts to evaluate species focused on their functionality in an ecosystem via their instrumental values. As I stated at the beginning of this paragraph, all species are tools for nature; they play their roles in keeping the earth functioning for every species. Conservation strategies can be built upon this functionality criterion.

 

In the perspective described above, nature might unconsciously be cast as a ruler. Instead, I would like to think of nature as a mediator between the agents of conservation attempts and the species to be saved, preparing habitat requirements and imposing boundary conditions. However, we have caused substantial changes in nature, and our mediator is reshaping itself as a result. Two sides will take action in response to an altered nature: humans, as the actors of conservation practices, and the species in affected areas, as the victims of the changes. Natural evolution and the continuation of species have been interrupted or distorted during the current climate change. It has even become harder to say which process is natural. Besides, naturally or unnaturally modified organisms are occupying a larger place than before due to advances in genetic engineering. I believe using the instrumental value of species can offer a solution to the conservation of modified nature and organisms, but before elaborating on this idea, I will discuss the second major type of value: final value.

Final value is problematic. Final value is the value something has for being what it is.2 Human beings are considered to have final value since they are not used by others as tools. Although we are not utilized as instruments by an orchestrating species, we still hold subjective and objective values: for being unique in certain circumstances and for being a part of history, respectively. Some think that every species inherently carries an objective value because each has its own welfare.8 Still, that is not very helpful once we realize that we need to choose some species for conservation by ranking them in some respect. Objective values have been discussed most thoroughly in the work of Rolston, who correlated environmental values with ecological definitions.7 Specifically, Rolston claimed that:

“As we progress from descriptions of fauna and flora, of cycles and pyramids, of stability and dynamism, on to intricacy, planetary opulence and interdependence, to unity and harmony with oppositions in counterpoint and synthesis, arriving at length at beauty and goodness, it is difficult to say where the natural facts leave off and where the natural values appear.”9

Rolston has been criticized for equating facts with values, on the grounds that values lie in the domain of subjectivity whereas facts are mostly tied to empirical evidence.2 He also took a further step in describing the objective value of species, holding that each species is part of evolutionary progress in time and carries a natural historical value.2,8 Furthermore, killing or failing to preserve a species would mean an intervention in future possibilities. Here, he is mixing the past and future natural values of species. Although he has a point in appreciating all species, a bit romantically, his suggestions are not practical at all in terms of conservation strategies. The value and ethics of species may pave the way for evaluating subjects from different, perhaps fruitful, perspectives; however, we still need some measure for conservation in endangered areas. For example, plastic-eating bacteria might be prioritized over lovely fluffy cats in an urban region. Both have objective value, and cats might have a higher subjective, aesthetic value. In this case, we can make our decision purely on instrumental value, since piled-up plastics can leach into soil and poison drinking water, which is vital for almost every species in an area.

Artifactualness at its finest. Any organism designed and engineered partially or fully by humans is artifactual. Genetically modified organisms, synthetic species, transgenics, cells edited with CRISPR, prosthetic organs, and protocells are all in the realm of artifactualness.2 With improvements in genetic engineering, many researchers have devoted their efforts to modifying or creating species, for reasons that are both practical and scientific. For instance, a study conducted on Drosophila melanogaster showed that controlled synthetic organisms (Drosophila synthetica) can be created to avoid hybridization of genetically modified animals with the wild-type population. Generating artificial species boundaries might enable us to set safety mechanisms for ‘natural’ species while attracting public attention with the potential to satisfy future medical and nutritional needs. Moreover, designing artificial species barriers could shed light on natural speciation mechanisms.10

 

Before discussing the value of being artifactual, I would like to stress the evolutionary perspective of genetic modifications by unpacking the above example. The scientific inquiry here, motivated by past natural speciation events, does have a value for the continuation of that specific species, e.g., Drosophila melanogaster.

Once we reveal the driving forces of evolution, we can predict the future directions of a species and eventually modify it if necessary. Shaping species as required is not usually considered natural, but genetic interventions are on the side of the species in the sense that, in an alarming situation like extinction, genetic alterations may open alternative paths for conservation.

Apparently, there is still a sort of value that connects ‘natural’ species with their modified versions. Some ethicists, environmentalists, and biologists hold that being natural and being a part of history are essential in value tagging. According to this view, modified organisms are far from natural, to varying extents. One of the proponents of this idea, Preston, claimed that:

“As the effects of human activities on the biosphere become more widespread, the 3.598 billion years of evolutionary history before the creation of the first artefact becomes a better and better referent for the term ‘the natural’.”11

He drew a correlation between human-made effects on the natural state of the environment and the long evolutionary history that preceded them. However, this way of describing the natural does not separate inorganic from organic, or non-living things from living organisms. On the other hand, having billions of years of evolutionary history does not exclude the value of modified organisms either. The presumed value of artifactualness would be instrumental, and I will go into detail in the following section.

Instrumental value of artifactual species should be appreciated in the course of climate change

After the industrial revolution, CO2 levels reached a point the earth last experienced about 3 million years ago.12 Before our steam engines, large factories, and cars, the natural state of the earth set the stage for the evolution of millions of species. The last 100 years changed the scene entirely. Some species have been expelled from the stage while their cousins have become famous: the California grizzly bear is now extinct, whereas the panda population has been restored. We expanded our territories, leaving smaller and smaller areas for other species to live in. Food stocks, water resources, and air quality have been damaged. We are trying to restore endangered areas and species, but we are not very clear on conservation strategies. To maintain balance among species within ecosystems, we need to understand the modifications to nature and to answer the question of why we saved pandas. Our incentives could be psychological or egoistic, such as the pleasure of fixing what we broke before. We might seek society’s gratitude for the saving. We might benefit ourselves by saving particular species. Apart from all these, my position rests upon the values of species for conservation practices.

 

To revisit a point I made before about considering nature as a mediator, I would like to amplify the idea that a modified mediator paves the way for modified species. Previously, I shared the opinions of concerned ethicists who take being natural as a condition for being saved. My position against this claim is that it is no longer fair to take a modified nature as natural. Especially during the last decades, ecological systems have become different from how they developed before. The ecosystems supporting the evolution and continuation of species have been changed by our technological activities. Accordingly, species have experienced forms of selection and modification in their altered environments. These modifications have been integrated into species’ lives physiologically or genetically. Eventually, modified species have already become part of that modified nature. One of the most obvious examples is the evolution of the peppered moth after the Industrial Revolution in the UK, where the dark color was beneficial in a polluted environment.

 

So, being natural is not a necessary condition for being preserved. It is also not right to classify species as natural or unnatural in changing circumstances. This points to a need to redefine artifactual species. The effort is important because conservation attempts must include the evaluation of all species regardless of whether they are natural, unnatural, or supernatural.

I propose to define artifactualness as a state that results from changes exerted by humans directly or indirectly. Direct effects include laboratory manipulations, genetic engineering, and prosthetic additions. Indirect effects are the results of global climate change.

I put emphasis on the indirect ones because of the urgency caused by rapid changes in the environment. Endangered species require special conservation steps locally, and their significance can be considered on the basis of their instrumental value. Genetically, physiologically, or naturally modified organisms can all be handled under the new definition of artifactualness. There is almost no species left on earth whose living space we have not touched. For the future of conservation, any functional species should be included in our agenda.

Conclusion

The motivation for this article was to ignite a spark about the future direction of conservation strategies. I briefly described a method of conservation that evaluates species based on their functionality in an ecosystem via their instrumental values. This approach does not imply that we are the users of tools in nature; instead, nature itself has turned into a mediator between us and endangered species. However, the abrupt anthropogenic changes in nature have created a problem in assessing what counts as ‘natural’. Opposing views that require naturalness, I prefer to define artifactualness as a state resulting from changes exerted by humans directly or indirectly. My approach is built upon the instrumental value of species as a basis and constructed with redefined artifactual species in the zeitgeist of climate change.

 

 

 

 

References

  1. Countries have a legal obligation to ensure clean air, says UN human rights representative. https://www.ccacoalition.org/ru/node/3007. Accessed 03/30/2020.
  2. Sandler, R.L., 2012. The ethics of species: An introduction. Cambridge University Press.
  3. Sarkar, S., 2002. Defining “biodiversity”; assessing biodiversity. The Monist, 85(1), pp.131-155.
  4. Margules, C. and Sarkar, S., 2007. Systematic conservation planning. Cambridge University Press.
  5. Santana, C., 2014. Save the planet: eliminate biodiversity. Biology & Philosophy, 29(6), pp.761-780.
  6. The Evolution of Males and Females – with Judith Mank. The Royal Institution YouTube channel. https://www.youtube.com/watch?v=En26p6GvtHw. Accessed 04/28/2020.
  7. Scoville, J.N., 1995. Value theory and ecology in environmental ethics: a comparison of Rolston and Niebuhr. Environmental Ethics, 17(2), pp.115-133.
  8. Rolston, H., 1995. Duties to endangered species. Encyclopedia of Environmental Biology, vol. 1, pp. 517-528.
  9. Rolston, H., 1975. Is there an ecological ethic? Ethics, 85(2), pp.93-109.
  10. Moreno, E., 2012. Design and construction of “synthetic species”. PLoS One, 7(7).
  11. Preston, C.J., 2008. Synthetic biology: drawing a line in Darwin’s sand. Environmental Values, 17(1), pp.23-39.
  12. Inglis, G.N., Farnsworth, A., Lunt, D., Foster, G.L., Hollis, C.J., Pagani, M., Jardine, P.E., Pearson, P.N., Markwick, P., Galsworthy, A.M. and Raynham, L., 2015. Descent toward the Icehouse: Eocene sea surface cooling inferred from GDGT distributions. Paleoceanography, 30(7), pp.1000-1020.

 

 

May 04

Excited electrons driving a reaction have been observed for the first time

“In past molecular movies, we have been able to see how atomic nuclei move during a chemical reaction,” said Peter Weber, a chemistry professor at Brown and senior author of the report. “But the chemical bonding itself, which is a result of the redistribution of electrons, was invisible. Now the door is open to watching the chemical bonds change during reactions.”

This spectacular article was recently published in Nature Communications. The electrons were excited with laser light, and they stayed in the excited state for about 200 femtoseconds, which was long enough for the researchers to capture the redistribution of electrons.

Scientists have directly seen the first step in a light-driven chemical reaction for the first time. They used an X-ray free-electron laser at SLAC to capture nearly instantaneous changes in the distribution of electrons when light hit a ring-shaped molecule called CHD. Within 30 femtoseconds, or millionths of a billionth of a second, clouds of electrons deformed into larger, more diffuse clouds corresponding to an excited electronic state. Credit: Thomas Splettstoesser/SCIstyle, Terry Anderson/SLAC National Accelerator Laboratory

The more detailed explanations can be found here: First direct look at how light excites electrons to kick off a chemical reaction

Apr 29

Seminar notes: The Evolution of Males and Females

Professor Judith Mank talked about the variety of sexual evolution and adaptation in living organisms, including humans, sea turtles, wild turkeys, gobies, clownfish, and so on. The important point she made was that reproductive mechanisms can be extremely diverse for evolutionary advantage. Being larger and more colorful, or having many offspring, can be good or bad. Moreover, sex can be determined by environmental factors such as temperature. As a result of the current climate change, some reptile and sea turtle populations are ending up with heavily skewed sex ratios (female-dominant, in the case of sea turtles). This has completely changed the sex ratios and natural balance of some species. Now, conservation biologists are trying to preserve endangered species by keeping their eggs in incubators to reproduce the incubation conditions that existed before human-made climate change.

 

Here is the YouTube link: The Evolution of Males and Females – with Judith Mank

Apr 19

Spaceship drawing for fun

I am a little skeptical of my engineering drawings. I eventually give them a fine shape, but I need to look for practical solutions while drawing. That’s why I’ve explored more features in CAD. I’m using Siemens NX rather than SolidWorks, Catia, AutoCAD, etc.; I think it is more flexible with drawing constraints. I chose my favorite item in the universe: a spaceship!

The drawing is not very detailed; I’ve only spent two days on it. I’ve added a main body with a crew cabin. There are two cargo docks at the bottom. Plus: 2 gyroscopes, 2 big engines built from 4 little ones, 2 antennas, and solar panels on the outside of the crew cabin.

I’m adding figures with some artistic backgrounds.

What if it lands on Mars? This background picture was actually taken by me 🙂 It is not Mars, unfortunately; it is a 50X microscope image of a damaged Kapton surface.

Miss Pilot on command

Solar panels on top

 

*The background on the cover is from a free source.

Mar 27

Easy, improved, inspired recordings: iZotope Spire

Well… I kept improving my recording abilities; the technical ones, of course. I’ve bought a portable microphone and recorder: the iZotope Spire! It connects to my iPad over wifi, and its application is pretty easy to use. Unfortunately, my wicked voice can be heard now :/ Anyways… I’m still using GarageBand for drums and keyboards. My old Fender is the lead, for sure \m/

 

Mar 27

Development of a small-scale helical surface dielectric barrier discharge for characterizing plasma-surface interfaces

Understanding plasma-surface interactions is important in a variety of emerging research areas, including sustainable energy, environmental remediation, medicine, and high-value manufacturing. Plasma-based technologies in these applications utilize surface chemistry driven by species created in the plasma or at a plasma-surface interface. Here, we develop a helical dielectric barrier discharge (DBD) configuration to produce a small-scale plasma that can be implemented in a diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) cell and integrated with a commercial Fourier transform infrared (FTIR) spectrometer instrument to study plasma interactions with inert or catalytic solid media. The design utilizes the entire surface of a cylinder as its dielectric, enhancing the plasma contact area with a packed bed. In this study, we characterize the electrical and visual properties of the helical DBD design in an empty reaction cell and with added potassium bromide (KBr) powder packing material in both air and argon gas environments at ambient conditions. The new surface DBD configuration was integrated into a DRIFTS cell and the time evolution of water desorbing from the KBr packed bed was investigated. Measurements show that this configuration can be operated in filamentary or glow-like mode depending on the gas composition and the water content absorbed on KBr solid media. These results not only set the basis for the study of plasma-surface interactions using a commercial FTIR, but also show that controlling the gas environment and water content in a packed bed might be useful for studying different plasma regimes that are typically not possible at atmospheric pressure.

You can reach the paper via this link: Nazli Turan et al 2020 J. Phys. D: Appl. Phys

Nazli Turan, Patrick Barboun, Pritam K. Nayak, Jason Hicks, and David B. Go

Accepted Manuscript online 25 March 2020 • © 2020 IOP Publishing Ltd

Selected figure captions from the paper:

Photographs of the air helical surface DBD at 90 kHz with an applied voltage of (a) 2.5 kV, (b) 3.5 kV, and (c) 5.5 kV.

(c) Unpacked and (d) packed air-exposed KBr cases, respectively; for both experiments, the power was 0.35 W at 54 kHz.

The helical surface DBD operated in the DRIFTS cell with (a) air-exposed KBr and (b) dry KBr, at an applied voltage of 900 V and 27 kHz.

 

Dec 26

Can an algorithm predict an unknown physical phenomenon by analyzing patterns and relations buried in clusters of data?

This article investigates whether an algorithm can uncover an undiscovered physical phenomenon by detecting patterns in the region where the data were collected. Pattern recognition is considered on the basis of inferential statistics, which differs from descriptive statistics, as McAllister implied. I assert that physical patterns should be correlated with mathematical expressions, which are interconnected with the physical (quantitative) laws. The known unknowns, e.g., gravitons, and the unknown unknowns, e.g., a fifth force of the universe, are examined in terms of the learning capabilities of an algorithm based on empirical data. I claim there is no obstacle preventing algorithms from discovering new phenomena.

 

Nazli Turan*

*Department of Aerospace and Mechanical Engineering, University of Notre Dame

 

1. Introduction. The notion of discovery has occupied many philosophers’ minds from a variety of angles. Some tried to formulate a way toward discovery, while others desperately tried to fix its meaning (e.g., the eureka moment) without considering time as a variable. Larry Laudan pointed out two motives behind a discovery: a pragmatic aspect, which is a search for scientific advancement, innovation, or invention; and an epistemological aspect, aiming to provide well-grounded, sound theories. He made a clear distinction between discovery and justification, treating the latter as an epistemological problem. Although I am aware of this distinction, I tend to accept that the two are intermingled, even in his example: “Self-corrective logics of discovery involve the application of an algorithm to a complex conjunction which consists of a predecessor theory and a relevant observation. The algorithm is designed to produce a new theory which is truer than the old. Such logics were thought to be analogous to various self-corrective methods of approximation in mathematics, where an initial posit or hypothesis was successively modified so as to produce revised posits which were demonstrably closer to the true value.” (Laudan, 1980). In my understanding, all pre- and post-processing is part of justification, while the outcome of an algorithm is a discovery. I will pursue his analogy of self-corrective logics and transform it into literal computer algorithms to examine whether a computer algorithm (or artificial intelligence, AI, machine learning program, deep neural network) can reveal undiscovered phenomena of nature.

To decide whether the algorithm implies a true description of the real world, I rely on the empirical adequacy principle proposed by Van Fraassen. The collected, non-clustered empirical data is the input to an algorithm capable of unsupervised learning, to avoid the user’s biases. The data and the resulting conclusions will be domain-specific; that is, the algorithms of main concern in this paper are those that can only interpolate the relations and patterns buried in the data, although there are preliminary results for physics-informed neural networks with extrapolation capabilities beyond the training data (Yang, 2019).

In my view, the algorithms of interest are scientific models acting on structured phenomena (consisting of sets of patterns and relations) and utilizing mathematical expressions together with the statistical nature of data. Inferential statistics (in Woodward’s sense) is emphasized after making a clear distinction between systematic uncertainties (due to the resolution or calibration of the instruments) and precision uncertainties (due to the sampling of data). McAllister’s opposition to Woodward and his concerns about patterns in empirical data are elaborated in the second section, and examples of probabilistic programming, an application of inferential statistics, are investigated. The third section discusses the learning and interpolating capabilities of algorithms. Lastly, I point out the known unknowns (gravitons) and the unknown unknowns (the fifth force) in the scope of conservation laws to discuss the possibility of discovery by computer algorithms.

2. Physical structures have distinguishable patterns. Before considering the capabilities of an algorithm, I want to dig into the relation between empirical data and physical structures. Two questions appear immediately. First: is the world really a patterned structure? Second: are we able to capture patterns of the physical world by analyzing empirical data? These questions have a broad background, but I would like to emphasize some important points here. The latter question, specifically, is my point of interest.

Bogen and Woodward shared their ideas in an atmosphere where unobservables were still questionable (Bogen and Woodward, 1988). They eagerly distinguished data from phenomena, stating that ‘data, which play the role of evidence for the existence of phenomena, for the most part can be straightforwardly observed. (…) Phenomena are detected through the use of data, but in most cases are not observable in any interesting sense of that term.’ Bogen explains his views on phenomena further: “What we call phenomena are processes, causal factors, effects, facts, regularities and other pieces of ontological furniture to be found in nature and in the laboratory.” (Bogen, 2011). I will assert claims parallel to their statements, with some additions. The recorded and collected data is the representation of regularities in phenomena. These regularities may provide a way to identify causes and effects. Apart from this, we are in an era where we have come to realize that directly unobservable objects might be real, such as the Higgs boson and gravitational waves. Our advances in building high-precision detectors and in understanding fundamental interactions paved the way for these scientific steps forward. I believe the discussion of observable vs. unobservable objects is no longer relevant. Therefore, I am inclined to accept that data provide strong evidence of phenomena whether they are observable or not. Yet the obtained data should be regulated, clustered, and analyzed with mathematical and statistical tools. Bogen and Woodward ended their discussion by illuminating both the optimistic and pessimistic sides of the path from data to phenomena:

“It is overly optimistic, and biologically unrealistic, to think that our senses and instruments are so finely attuned to nature that they must be capable of registering in a relatively transparent and noiseless way all phenomena of scientific interest, without any further need for complex techniques of experimental design and data-analysis.  It is unduly pessimistic to think we cannot reliably establish the existence of entities which we cannot perceive. In order to understand what science can achieve, it is necessary to reverse the traditional, empiricist placement of trust and of doubt.” (Bogen, Woodward 1988)

Inevitably, some philosophers started another discussion of patterns in empirical data by questioning how to demarcate the patterns that are physically significant. James W. McAllister supposed that a pattern is physically significant if it corresponds to a structure in the world, but then argued that all patterns must be regarded as physically significant (McAllister, 2010). I agree with the idea that physical patterns should differ from other patterns: physical patterns should be correlated with mathematical expressions which are interconnected with physical laws. Against McAllister’s idea that all patterns are physically significant, I can give the example of Escher’s patterns, artistically designed by the Dutch graphic artist M.C. Escher. If we take the color codes or pixels of his paintings as empirical data, we can surely come up with a mathematical expression or pattern that reproduces his paintings, but that pattern does not correspond to a physical structure. To support my claim, I can provide another example of parameterization expressing the designs Escher created. Craig S. Kaplan from the Computer Graphics Lab at the University of Waterloo approached the problem with a unique question: can we automate the discovery of grids formed by recognizable motifs? His team developed an algorithm that can produce Escher’s patterns (see Fig. 1, adapted from the Escherization website).

Fig. 1. The Escherization problem (Kaplan, Craig S.).

If we are convinced that Escher’s patterns do not correspond to any physical structure in the world, I would like to discuss physical patterns in data with noise or error terms. I do not use ‘noise’, because this term is used mostly in signal processing and therefore carries limiting connotations; besides, for people outside scientific research, ‘noise’ may be conceived as unwanted or irrelevant measurements. ‘Error’ is frequently used in data analysis, but again I prefer to avoid misunderstandings about its physical causes. I will continue with the ‘uncertainty analysis’ of measured data. There are two types of uncertainty in data: systematic uncertainties, which stem from instrument calibration and resolution, and precision uncertainties, which are due to the repetitive sampling of the system (Dunn, 2010). To analyze these uncertainties, we assume that they follow a random (normal/Gaussian) distribution. I want to give an example from my specific area of research: the lifetime of the current peaks (Fig. 2). In the figure there are three current peaks, and I want to estimate their duration (time on the x-axis) using descriptive statistics. I decide on a 95% confidence level, assuming a normal distribution of the current peaks. The measured lifetimes are 12 ns, 14 ns, and 20 ns, respectively. I can easily find their mean and the standard deviation of the mean (u_p), 15.33 ± 4.16 ns. However, this is not the complete picture. In the figure, you can see step-by-step increments of the current, which are due to the time resolution of the oscilloscope, 2 ns. I need to take this systematic uncertainty (u_s) into account as well.

Fig. 2. The change in the plasma current in time.
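As a rough illustration of this bookkeeping, the sketch below combines the precision uncertainty of the three lifetimes quoted above with the 2 ns oscilloscope resolution in quadrature. The combination rule and the choice of t-value are textbook conventions rather than part of the original analysis, so treat the numbers as an example, not the definitive uncertainty budget.

```python
import math

# Three measured current-peak lifetimes (ns), as read from Fig. 2
lifetimes = [12.0, 14.0, 20.0]
n = len(lifetimes)

# Descriptive statistics of the sample
mean = sum(lifetimes) / n
sample_std = math.sqrt(sum((x - mean) ** 2 for x in lifetimes) / (n - 1))

# Precision (random) uncertainty of the mean at ~95% confidence;
# the two-sided t-value for n - 1 = 2 degrees of freedom is about 4.30.
t_95 = 4.30
u_precision = t_95 * sample_std / math.sqrt(n)

# Systematic uncertainty taken here as the oscilloscope time resolution (2 ns);
# other conventions (e.g. half the resolution) are equally defensible.
u_systematic = 2.0

# Combine the two contributions in quadrature for a total uncertainty estimate.
u_total = math.sqrt(u_precision ** 2 + u_systematic ** 2)

print(f"mean lifetime   = {mean:.2f} ns")
print(f"sample std dev  = {sample_std:.2f} ns")
print(f"precision (95%) = +/- {u_precision:.2f} ns")
print(f"systematic      = +/- {u_systematic:.2f} ns")
print(f"combined        = {mean:.2f} +/- {u_total:.2f} ns")
```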

Up to this point, I have mostly made mathematical manipulations to represent the data in a compact way. One important assumption was the normal distribution of uncertainties, which is observed frequently in nature, for example in human height, blood pressure, etc. The other key step was choosing a confidence level. Now it is time to discuss inferential statistics. What type of information can I deduce from empirically measured data? For example, here I observed that the lifetime of the current peaks increases from the first one (12 ns) to the third one (20 ns). Is there a physical reason for this? Can I conduct an analysis of variance, or test the hypothesis that if I run the current for a longer time, I will observe longer lifetimes (>20 ns)? Even though scientists mostly use descriptive statistics to present their data in graphs or tables, they are, in general, aware of the causes behind empirical data that yield trends or patterns. These patterns are the footprints of physical structures, in other words, phenomena.

In McAllister’s paper (2010), the term ‘noise’ is used so loosely that he assumes noise terms can simply be added up, although we cannot add them if the underlying parameters depend on each other. He provided an example of the deviations in the length of a day (Dickey, 1995). His argument was that the noise is not randomly distributed and has a component increasing linearly per century (1-2 µs per century). All the forces affecting the length of a day are illustrated in Dickey’s paper (Fig. 3).

Fig. 3. The forces perturbing Earth’s rotation (Dickey, 1995).

Dickey described all these individual effects as follows:

“The principle of conservation of angular momentum requires that changes in the Earth’s rotation must be manifestations of (a) torques acting on the solid Earth and (b) changes in the mass distribution within the solid Earth, which alters its inertia tensor. (…) Changes in the inertia tensor of the solid Earth are caused not only by interfacial stresses and the gravitational attraction associated with astronomical objects and mass redistributions in the fluid regions of the Earth but also by processes that redistribute the material of the solid Earth, such as earthquakes, postglacial rebound, mantle convection, and movement of tectonic plates. Earth rotation provides a unique and truly global measure of natural and man-made changes in the atmosphere, oceans, and interior of the Earth. “(Dickey, 1995, pg.17)

 

It is admirable that a group of researchers spent considerable time upgrading their measurement tools, and the author explains how they tried to reduce the systematic uncertainties by doing so (p. 21 of the original article). The entire paper is about the hourly, monthly, and yearly patterns affecting the length of a day and how their measurement tools (interferometers) are capable of detecting small changes. Dickey clearly states that ‘an analysis of twenty-five years of lunar laser ranging data provides a modern determination of the secular acceleration of the moon of -25.9 ± 0.5 arcsec/century² (Dickey et al., 1994a) which is in agreement with satellite laser ranging results.’ He provided the noise term for this specific measurement, and there is no reason to assume it is not randomly distributed, as McAllister claimed. Dickey captured the oscillations (patterns) on time scales of centuries in the data and supported his empirical findings with physical explanations: tidal dissipation, occurring both in the atmosphere and the oceans, is the dominant source of variability. In sum, the important point I want to make against McAllister’s account of patterns is that empirical data can yield causal relations which can be understood as patterns, and moreover, the uncertainties, in the sense of inferential statistics, can provide information about patterns too.

Inferential statistics generally entails a probabilistic approach. One application to computer algorithms is Bayesian probabilistic programming, which adopts hypotheses that make probabilistic predictions, such as “this pneumonia patient has a 98% chance of complete recovery” (Patricia J. Riddle’s lecture notes). With this approach, algorithms combine probabilistic models with inferential statistics. The MIT Probabilistic Computing Project is one of the important research laboratories in this area. They offer probabilistic search in large data sets, AI assessment of data quality, virtual experiments, and AI-assisted inferential statistics: “what genetic markers, if any, predict increased risk of suicide given a PTSD diagnosis and how confident can we be in the amount of increase, given uncertainty due to statistical sampling and the large number of possible alternative explanations?” (BayesDB). The models used in BayesDB are ‘empirical’, so they are expected to interpolate physical laws in regimes of observed data, but they cannot extrapolate the relationship to new regimes where no data have been observed. They showed that data obtained from satellites can be interpolated to Kepler’s law (Saad & Mansinghka, 2016).
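To make the idea of interpolating physical laws in regimes of observed data concrete, here is a hypothetical, minimal stand-in (not the BayesDB pipeline itself): a least-squares fit of a power law T = k·a^p to noisy orbital data, which recovers an exponent close to 3/2, i.e., Kepler’s third law. The planet values and the noise level are illustrative assumptions.

```python
import numpy as np

# Simulated "satellite" data: semi-major axis a (AU) and orbital period T (years)
# for Mercury through Jupiter, with a little measurement noise added.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862])
rng = np.random.default_rng(0)
T_obs = T * (1 + 0.01 * rng.standard_normal(T.size))  # ~1% noise

# Hypothesize a power law T = k * a**p and fit it in log space:
# log T = log k + p * log a  ->  ordinary least squares.
p, log_k = np.polyfit(np.log(a), np.log(T_obs), 1)

print(f"fitted exponent p ~ {p:.3f}  (Kepler's third law predicts 1.5)")
print(f"fitted prefactor k ~ {np.exp(log_k):.3f}")
```

Within the observed range of a, such a fit interpolates well; far outside it, nothing guarantees the relation still holds, which is exactly the caveat noted above.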

3. Empirical data can be interpolated to physical laws. Empirical data sets can be used as raw input to an algorithm trained to partition the data into subsections. If this algorithm can yield relations between data sets, we might be able to obtain physical laws in regimes of observed data. There are two main concerns here: the possibility of imposing the user’s biases during parameterization, and the user’s capability to understand the resulting output. The first obstacle has been overcome by unsupervised learning programs. The latter needs to be elaborated.

Algorithms (both recursive and iterative ones) have black boxes, which creates an accessibility problem for the mid-layers of the algorithm. Emily Sullivan states that ‘modelers do not fully know how the model determines the output’ (Sullivan, 2019). She continues:

“(…) if a black box computes factorials and we know nothing about what factorials are, then our understanding is quite limited. However, if we already know what factorials are, then this highest-level black box, for factorials, turns into a simple implementation black box that is compatible with understanding. This suggests that the level of the black box, which is coupled with our background knowledge of the model and the phenomenon the model bears on (…)” (Sullivan, 2019)

She has a point in relating black boxes to our understanding; however, this relation does not rule out the possibility of discoveries. The existence of black boxes is not a limitation for an algorithm that may yield unknown physical relations; the limitation is our understanding. As Lynch once titled his book, Knowing More and Understanding Less in the Age of Big Data, we are struggling between knowing and understanding. On the other hand, algorithms are acting on the side of ‘learning’. Although the concept of learning is worth discussing philosophically, I will consider it in the sense of computer (algorithm) learning. There are seven categories of algorithmic learning: learning by direct implanting, learning from instruction, learning by analogy, learning from observation and discovery, learning through skill refinement, learning through artificial neural networks, and learning through examples (Kandel and Langholz, 1992). In the case of learning from observation and discovery, the unsupervised learner reviews the prominent properties of its environment to construct rules about what it observes. GLAUBER and BACON are two examples investigated elsewhere (Langley, 1987). These programs bring in ‘non-trivial discoveries in the sense of descriptive generalizations’, and their capability to generate novelties has been questioned (Ratti, 2019). The novelties might be subtle, but they still carry the properties of being novelties that were hidden from human knowledge before.

I have noticed that people tend to separate qualitative laws from quantitative laws, implying that algorithms are not effective for laws of qualitative structure because human integration is necessary to evaluate novelties (Ratti, 2019). I want to redefine these terms, drawing inspiration from Patricia J. Riddle’s class notes on ‘Discovering Qualitative and Quantitative Laws’: qualitative descriptions state generalizations about classes, whereas quantitative laws express a numeric relationship, typically stated as an equation, and refer to a physical regularity. I am hesitant to call qualitative descriptions laws, because I believe this confusion stems from the historical development of biology and chemistry. Here, I want to refer to Svante Arrhenius (Ph.D., M.D., LL.D., F.R.S., Nobel Laureate, Director of the Nobel Institute of Physical Chemistry, a pretty impressive title!), who wrote the book ‘Quantitative Laws in Biological Chemistry’ in 1915. In the first chapter of his book, he emphasized the historical methods of chemistry:

“As long as only qualitative methods are used in a branch of science, this cannot rise to a higher stage than the descriptive one. Our knowledge is then very limited, although it may be very useful. This was the position of Chemistry in the alchemistic and phlogistic time before Dalton had introduced and Berzelius carried through the atomic theory, according to which the quantitative composition of chemical compounds might be determined, and before Lavoisier had proved the quantitative constancy of mass. It must be confessed that no real chemical science in the modern sense of the word existed before quantitative measurements were introduced. Chemistry at that time consisted of a large number of descriptions of known substances and their use in the daily life, their occurrence and their preparation in accordance with the most reliable receipts, given by the foremost masters of the hermetic (i.e. occult) art.” (Arrhenius, 1915)

The point I want to make is that our limited knowledge might require qualitative approaches, for example in biology; however, descriptive processes can only be an important part of tagging clusters, while the mathematical expressions derived from empirical data reveal the quantitative laws of nature. Furthermore, our limited knowledge and the presence of black boxes are not obstacles for unsupervised learning algorithms, which can cluster and tag data sets and demonstrate relations between them. As an example, a group of researchers showed that their algorithm can ‘discover physical concepts from experimental data without being provided with additional prior knowledge’. The algorithm (SciNet) discovers the heliocentric model of the solar system; that is, it encodes the data into the angles of the two planets as seen from the Sun (Iten, Metger et al., 2018). They explained the encoding and decoding processes by comparing human and machine learning (Fig. 4).

Fig. 4. (a) Human learning, (b) machine learning (Iten, Metger et al., 2018).

 

In human learning, we use representations (the initial position, the velocity at a point, etc.), not the original data. As mentioned in the paper, producing the answer by applying a physical model to the representation is called decoding. The algorithm discussed here produces latent representations by compressing empirical data, utilizing a probabilistic encoder and decoder in the process. As a result, the researchers were able to recover physical variables from experimental data. SciNet learns to store the total angular momentum, a conserved quantity of the system, in order to predict the heliocentric angles of Earth and Mars. It is clear that an unsupervised algorithm can capture physical laws (i.e., conservation laws) from empirical data.
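SciNet itself is a probabilistic (variational) autoencoder; the toy sketch below is only a loose stand-in for the encoding step. Several noisy sensor channels that all depend on a single hidden physical variable are compressed with PCA, and one latent component turns out to carry nearly all the variance, i.e., the compact representation recovers the hidden variable (up to a sign). The sensor gains and the noise level are made-up assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical setup: one hidden physical variable (say, an angle) is observed
# indirectly through five noisy, linearly related sensor channels.
# The learner only ever sees the sensor matrix X.
rng = np.random.default_rng(1)
hidden = rng.uniform(-1.0, 1.0, 1000)                  # true latent variable
gains = np.array([0.5, 1.0, -0.8, 2.0, 1.5])           # per-sensor response
X = np.outer(hidden, gains) + 0.05 * rng.standard_normal((1000, 5))

# "Encoding": compress the raw observations into latent components.
pca = PCA(n_components=2).fit(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# The first component carries nearly all the variance, i.e. the data are
# effectively one-dimensional: the compressed coordinate tracks the hidden
# variable (up to a sign), much as SciNet recovers heliocentric angles.
latent = pca.transform(X)[:, 0]
print("corr(latent, hidden):", np.round(np.corrcoef(latent, hidden)[0, 1], 3))
```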

4. An algorithm can reveal unknown physical phenomena by analyzing patterns. Now I want to explore the territory of the unknown. Until now, I have tried to show that physical structures have patterns which can be represented by mathematical expressions accompanied by inferential statistics. The expressions are correlated with the quantitative laws that govern the relations and interactions of phenomena. The observed empirical data includes a set of patterns and uncertainties. An unsupervised algorithm can take the empirical data and proceed with descriptive tagging by itself. The layered process might be inaccessible to the user, but the black boxes do not prevent the discovery of physical laws. Therefore, an algorithm can interpolate data in the observed region to physical laws. This is a learning process for an algorithm, although it is a little different from human learning.

As a curious human being, I wonder whether algorithms could reveal physical structures that are unknown to today’s people. The first question is: why are those structures unknown? I can provide two primary reasons: either our instruments require improvements to catch small changes buried in the signal, or our understanding is too limited to attach meaning to the observations. Gravitational waves were hypothetical entities until we upgraded our interferometers to capture the waves emitted by the collision of two massive black holes. Tackling the limitations of our understanding is more challenging; however, history embodies many paradigm shifts, such as the understanding of statistical thermodynamics, general relativity, or wave-particle duality. If you went back to the 15th century and told the scientists of the time that the cause of static electricity (electrons) follows the uncertainty principle, so you cannot fully observe it when you measure it, they would laugh at you badly. Therefore, I never rule out the chance of finding unknowns in science. The unknown could be a new particle, force, interaction, or process. Whatever it is, I will assume it is a part of physical phenomena.

The second important question is: can we trust algorithms to find unknowns? First of all, trained algorithms, supervised or unsupervised, have the capability to learn, as I described in the previous section. To the best of my knowledge, such algorithms have proven themselves by recovering Kepler’s laws, conservation laws, and Newton’s laws (Saad & Mansinghka, 2016; Iten, Metger et al., 2018). That is to say, these algorithms are rediscovering natural laws that were unknown to 15th-century humanity. I maintain that algorithms might provide the next generation of physical laws or related phenomena, and I do not see any obstacle preventing them from discovering something new.

The third question would be: are those unknowns epistemically accessible to human understanding? Although this question is related to the first one, I want to explicitly distinguish two kinds of unknowns: known unknowns and unknown unknowns. The first kind is claimed to exist but has never been detected, such as the graviton, the carrier of the gravitational force. The second kind has never even been predicted to exist, such as a fifth force of nature. The most prominent challenge in detecting a graviton is the impossible design of the detector, which would have to weigh as much as Jupiter (Rothman & Boughn, 2006). The second problem is nature itself: to detect a single graviton, we would need to shield against 10^33 neutrino events, and we do not have a monochromatic graviton source. That is why the graviton is accepted as a hypothetical particle. However, we cannot rule out its existence either. The latest LIGO interferometers collected data proving the existence of gravitational waves, and some think this data can be used to constrain the properties of gravitons (Dyson, 2014). Why not feed our algorithms with these data? Maybe they can uncover relations deep in the data, and eventually we can complete the puzzle of the standard model. The overarching goal of this example is to emphasize the possibility of discovering unknowns. I accept that empirical data might not guarantee a solid result (i.e., the existence of gravitons); however, an algorithm can point out the possibilities buried in the data in a probabilistic way, such as the probable mass of the graviton within certain limits.

Alternatively, algorithms might reveal unknown unknowns. Before discussing this, I want to examine one of the latest findings in experimental physics: a possible fifth force. Physicists claim they have found a new force of nature (Krasznahorkay, 2019). The excited (energized) nucleus decayed and emitted a pair of particles that was detected by a double-sided silicon strip detector (DSSD) array. The mass of the new force-carrier particle is inferred from the conservation of energy. “Based on the law of conservation of energy, as the energy of the light producing the two particles increases, the angle between them should decrease. Statistically speaking, at least. (…) But if this strange boson isn’t just an illusion caused by some experimental blip, the fact it interacts with neutrons hints at a force that acts nothing like the traditional four.” (Mcrae, 2019).

The existence of a fifth force was totally unknown until last month. Although the finding requires further evidence from other labs, no one has been able to point to a misinterpretation of the data, as far as I know. This does not make the finding undeniable, but there is an anomaly tied to the law of conservation of energy. Previously, I showed some example algorithms that recover the conservation laws. My claim is that if an algorithm is able to capture the conservation laws, then it might reveal the relations of a new phenomenon, in this case a new force! A force requires a carrier particle, which they have named the X17 particle, with mass m_X c^2 = 16.70 ± 0.35 (stat) ± 0.5 (sys) MeV, where the statistical and systematic uncertainties are stated clearly (Krasznahorkay, 2019). My far-fetched point would be a humble suggestion for researchers working on this type of project: the detector data could be analyzed by algorithms to see whether they reveal a new relation.
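In that spirit, here is a deliberately simplified sketch of what such an algorithmic scan of detector data could look like. It is not the ATOMKI analysis; the counts, the background model, and the 3-sigma threshold are all invented. Counts binned by opening angle are fitted with a smooth background, and bins that deviate strongly from it are flagged as a candidate anomaly.

import numpy as np

rng = np.random.default_rng(2)

# Invented angular-correlation data: smoothly falling background counts
# per opening-angle bin, plus a small injected excess ("bump") near 140 deg.
angles = np.arange(40, 171, 5)
background = 5000.0 * np.exp(-angles / 45.0)
signal = 120.0 * np.exp(-0.5 * ((angles - 140.0) / 5.0) ** 2)
counts = rng.poisson(background + signal)

# Fit only the smooth part: an exponential background in log space.
slope, intercept = np.polyfit(angles, np.log(counts + 1.0), 1)
expected = np.exp(intercept + slope * angles)

# Flag bins whose excess exceeds 3 sigma (Poisson error ~ sqrt(expected)).
z = (counts - expected) / np.sqrt(expected)
for ang, score in zip(angles, z):
    if score > 3.0:
        print(f"candidate excess at {ang} deg: {score:.1f} sigma")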

5. Conclusion. The motivation for writing this article was to ignite a spark about the future possibilities of unsupervised learning algorithms. The capabilities of recognizing patterns and generating novelties are questionable, both because of the qualitative aspects of phenomena and because of the human factor involved in detecting new discoveries and in understanding what an algorithm provides; nevertheless, I am very optimistic about probabilistic programming combined with inferential statistics. Here, I showed that empirical data has patterns distinguishable from any other patterns (for example, artistic ones). A physical pattern can be represented by mathematical relations which are the traces of physical phenomena. There are many algorithms that capture physical relations based on the patterns in empirical data. They can also recover the physical laws of nature. Given all these points, I claim we will be able to discover new physical phenomena in the real world through algorithms. Apart from inadequate instruments and our limited understanding, I do not think there is any hurdle embedded inherently in a running algorithm. The unknowns are unknown to human beings, not to computer algorithms.

 

References

Laudan L. (1980). Why was the Logic of Discovery Abandoned?. In: Nickles T. (eds) Scientific Discovery, Logic, and Rationality. Boston Studies in the Philosophy of Science, vol 56. Springer, Dordrecht.

Van Fraassen, B. (1980). Arguments concerning scientific realism (pp. 1064-1087).

Yang, X. I. A., Zafar, S., Wang, J. X., & Xiao, H. (2019). Predictive large-eddy-simulation wall modeling via physics-informed neural networks. Physical Review Fluids, 4(3), 034602.

Woodward, J. (2010). Data, phenomena, signal, and noise. Philosophy of Science, 77(5), 792-803.

McAllister, J. W. (2010). The ontology of patterns in empirical data. Philosophy of Science, 77(5), 804-814.

Bogen, J., & Woodward, J. (1988). Saving the phenomena. The Philosophical Review, 97(3), 303-352.

Bogen, J. (2011). ‘Saving the phenomena’ and saving the phenomena. Synthese, 182(1), 7-22.

Kaplan, C. S. (2000). Escherization. http://www.cgl.uwaterloo.ca/csk/projects/escherization/

Dunn, P. (2010). Measurement and data analysis for engineering and science (2nd ed.). Boca Raton, FL: CRC Press/Taylor & Francis.

Dickey, J. O. (1995). Earth rotation variations from hours to centuries. Highlights of Astronomy, 10, 17-44.

Mansinghka, V. K. (2019). BayesDB. http://probcomp.csail.mit.edu/software/bayesdb/

Sullivan, E. (2019). Understanding from machine learning models. British Journal for the Philosophy of Science.

Lynch, M. P. (2016). The Internet of us: Knowing more and understanding less in the age of big data. WW Norton & Company.

Kandel, A., & Langholz, G. (1992). Hybrid architectures for intelligent systems. CRC press.

Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. MIT press.

Ratti, E. (2019). What kind of novelties can machine learning possibly generate? The case of genomics. Unpublished manuscript.

Riddle, P. J. (2017). Discovering Qualitative and Quantitative Laws. https://www.cs.auckland.ac.nz/courses/compsci760s2c/lectures/PatL/laws.pdf

Arrhenius, S. (1915). Quantitative laws in biological chemistry (Vol. 1915). G. Bell.

Iten, R., Metger, T., Wilming, H., Del Rio, L., & Renner, R. (2018). Discovering physical concepts with neural networks. arXiv preprint arXiv:1807.10300.

Saad, F., & Mansinghka, V. (2016). Probabilistic data analysis with probabilistic programming. arXiv preprint arXiv:1608.05347.

Rothman, T., & Boughn, S. (2006). Can gravitons be detected?. Foundations of Physics, 36(12), 1801-1825.

Dyson, F. (2014). Is a graviton detectable?. In XVIIth International Congress on Mathematical Physics (pp. 670-682).

Krasznahorkay, A. J., Csatlos, M., Csige, L., Gulyas, J., Koszta, M., Szihalmi, B., … & Krasznahorkay, A. (2019). New evidence supporting the existence of the hypothetic X17 particle. arXiv preprint arXiv:1910.10459.

Mcrae, M. (2019). Physicists Claim They’ve Found Even More Evidence of a New Force of Nature. https://www.sciencealert.com/physicists-claim-a-they-ve-found-even-more-evidence-of-a-new-force-of-nature/amp

Dec 26

Values in Science

In defence of the value free ideal, Gregor Betz

  • the most important distinction:

“The methodological critique is not only ill-founded, but distracts from the crucial methodological challenge scientific policy advice faces today, namely the appropriate description and communication of knowledge gaps and uncertainty.”

Acknowledging the miscommunication between scientists and policy makers is an important step toward informing the public on crucial issues.

  • a clarification question/criticism:

“Scientists are expected to answer these questions with “plain” hypotheses: Yes, dioxins are carcinogenic; or: no, they aren’t. The safety threshold lies at level X; or: it lies at level Y.”

I think those are not even hypotheses. A hypothesis should clarify a problem or challenge under certain conditions. For example, 'If this …, then that …' would be a candidate for a hypothesis, but 'plain' statements are not part of scientific communication. I feel that throughout the paper the author emphasized effective communication more than value judgments.

Inductive Risk and Values in Science, Heather Douglas

  • the most important distinction:

“as most funding goes toward “applied” research, and funding that goes toward “basic” research increasingly needs to justify itself in terms of some eventual applicability or use.” pg.577.

I think this applicability criterion brings scientists, policy makers, and citizens together, although all three groups may try to impose different values on science.

  • a clarification question/criticism:

“In cases where the consequences of making a choice and being wrong are clear, the inductive risk of the choice should be considered by the scientists making the choice. (…) The externality model is overthrown by a normative requirement for the consideration of non-epistemic values, i.e., non-epistemic values are required for good reasoning.” pg.565.

What if we don’t know the consequences? What are the cases in which being wrong is so clear? I guess I don’t understand why the externality model is overthrown either.

Coming to Terms with the Values of Science: Insights from Feminist Science Studies Scholarship, Alison Wylie, Lynn Hankinson Nelson

  • the most important distinction:

“We have so far emphasized ways in which feminist critics reveal underlying epistemic judgments that privilege simplicity and generality of scope (in the sense of cross‐species applicability) over empirical adequacy, explanatory power, and generality of scope in another sense.(…) the kinds of oversimplification that animate the interest in reductive and determinist accounts of sex difference-played a role in their evaluative judgment that the costs of the tradeoffs among epistemic values characteristic of the standard account were unacceptable.” pg. 16.

I really appreciate this distinction emphasizing 'simplicity over empirical adequacy'. Apparently, the shortcuts taken through the best explanation have clogged the way to 'true science'. Contextual values can advance science not only by including women but also by considering all under-represented groups, in a way that their inclusion makes a difference. I heard an example from a documentary correlating ancient beliefs with natural events: I think Aztec mythology recorded a meteorite falling at some point, and the event was passed down as a story to today's people. Before talking to the indigenous people, the huge crater left by the meteorite could not be explained. Alternative history and alternative science are holding hands nowadays because of the efforts of feminists and under-represented groups (though it is highly debatable why there is an 'over-represented group' at all, such as the dominance of white American men in positions of power, even among conference speakers and keynotes).

  • a clarification question/criticism:

“The prospects for enhancing the objectivity of scientific knowledge are most likely to be improved not by suppressing contextual values but by subjecting them to systematic scrutiny; this is a matter of making the principle of epistemic provisionality active.” pg. 18.

I think this is a bit vague. How can we make the principle of epistemic provisionality active? Whose role is this?

Values and Objectivity in Scientific Inquiry, Helen E. Longino

  • the most important distinction:

“This constitution is a function of decision, choice, values and discovery. (…) thus contextual values are transformed into constitutive values.” pg.100.

Causality in time establishes the relations between phenomena. A conceptualized phenomenon is subject to the historical development of the idea behind it. Therefore, the constitution of the object of study is affected by the contextual values of its time.

  • a clarification question/criticism:

“Where we do not know enough about a material or a phenomenon (…) the determination by social or moral concerns (…) little to do with the factual adequacy of those procedures.” pg.92.

Here, I can suggest a minor engineering insight. I admit that it cannot be applied to every case, but I think it is helpful in many. When we design something (it can be a little screw or an aircraft wing), we always consider the worst-case scenarios. If we calculate the maximum stress a part can handle, we apply a safety factor of, say, 5: we design the part to withstand 100 kPa where the maximum expected pressure would be 20 kPa. We propose a lifetime for a product much longer than it will be needed. Accordingly, we report all the possibilities that could be encountered and under which conditions. Therefore, even if we do not know the exact procedure, we can build in safety factors and make them standard. I know, for example, that aircraft turbine blades are designed to six-sigma failure rates, roughly one failure in a million! However, we also know the Boeing 737 MAX skipped some safety checks on its sensors, resulting in two deadly crashes.
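As a small worked version of the safety-factor idea (the numbers come from the example above; the function names are mine), the check is nothing more than:

def design_load(expected_load, safety_factor=5.0):
    # The part must survive the expected load times the safety factor.
    return expected_load * safety_factor

def is_safe(capacity, expected_load, safety_factor=5.0):
    # A design is accepted only if its capacity covers the factored load.
    return capacity >= design_load(expected_load, safety_factor)

# Example from the text: 20 kPa expected pressure, safety factor of 5.
print(design_load(20.0))      # 100.0 kPa required capacity
print(is_safe(100.0, 20.0))   # True
print(is_safe(60.0, 20.0))    # False: only a factor-of-3 margin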

 

 
