Dec 26

Can an algorithm predict an unknown physical phenomenon by analyzing patterns and relations buried in clusters of data?

This article investigates whether an algorithm can reveal an undiscovered physical phenomenon by detecting patterns in the region where the data were collected. Pattern recognition is considered on the basis of inferential statistics, which differs from descriptive statistics, as McAllister implied. I assert that physical patterns should be correlated with mathematical expressions, which are interconnected with the physical (quantitative) laws. The known unknowns, e.g. gravitons, and the unknown unknowns, e.g. a fifth force of the universe, are examined in terms of the learning capabilities of an algorithm trained on empirical data. I claim there is no obstacle preventing algorithms from discovering new phenomena.


Nazli Turan*

*Department of Aerospace and Mechanical Engineering, University of Notre Dame


1. Introduction. The notion of discovery has occupied many philosophers’ minds from a variety of angles. Some tried to formulate a route to discovery, while others tried to fix its meaning (e.g. the eureka moment) without considering time as a variable. Larry Laudan pointed out two motives yielding a discovery: a pragmatic aspect, the search for scientific advancement, innovation, or invention; and an epistemological aspect, aiming to provide well-grounded, sound theories. He made a clear distinction between discovery and justification, treating the latter as an epistemological problem. Although I’m aware of this distinction, I tend to accept that the two are intermingled, even in his example: “Self-corrective logics of discovery involve the application of an algorithm to a complex conjunction which consists of a predecessor theory and a relevant observation. The algorithm is designed to produce a new theory which is truer than the old. Such logics were thought to be analogous to various self-corrective methods of approximation in mathematics, where an initial posit or hypothesis was successively modified so as to produce revised posits which were demonstrably closer to the true value.” (Laudan, 1980). In my understanding, all pre- and post-processing is part of justification, while the outcome of an algorithm is a discovery. I will pursue his analogy of self-corrective logics and transform it into literal computer algorithms, to examine whether a computer algorithm (or artificial intelligence, AI, machine learning program, deep neural network) can reveal undiscovered phenomena of nature.

To decide whether an algorithm implies a true description of the real world, I rely on the empirical adequacy principle proposed by Van Fraassen. The collected, non-clustered empirical data is the input to an algorithm capable of unsupervised learning, which avoids the user’s biases. The data and the resulting conclusions will be domain-specific: the algorithms that can only interpolate the relations and patterns buried in the data are the main concern of this paper, although there are preliminary results for physics-informed neural networks, which have extrapolation capabilities beyond the training data (Yang, 2019).

In my view, the algorithms of interest are scientific models acting on structured phenomena (consisting of sets of patterns and relations) and utilizing mathematical expressions accompanied by the statistical nature of data. Inferential statistics (in Woodward’s sense) is emphasized after a clear distinction is made between systematic uncertainties (due to the resolution or calibration of the instruments) and precision uncertainties (due to the sampling of data). McAllister’s opposition to Woodward and his concerns about patterns in empirical data are elaborated in the second section, and examples of probabilistic programming, which is an application of inferential statistics, are investigated. The third section discusses the learning and interpolating capabilities of algorithms. Lastly, I point out the known unknowns (gravitons) and the unknown unknowns (the fifth force) in the scope of conservation laws to discuss the possibility of discovery by computer algorithms.

2. Physical structures have distinctive patterns. Before considering the capabilities of an algorithm, I want to dig into the relation between empirical data and physical structures. Two questions appear immediately. First: is the world really a patterned structure? Second: are we able to capture patterns of the physical world by analyzing empirical data? These questions have a broad background, but I would like to emphasize some important points here. The latter question, specifically, is my point of interest.

Bogen and Woodward shared their ideas in an atmosphere where unobservables were still questionable (Bogen, Woodward 1988). They eagerly distinguished data from phenomena, stating that ‘data, which play the role of evidence for the existence of phenomena, for the most part can be straightforwardly observed. (…) Phenomena are detected through the use of data, but in most cases are not observable in any interesting sense of that term.’ Bogen explains his views on phenomena further: “What we call phenomena are processes, causal factors, effects, facts, regularities and other pieces of ontological furniture to be found in nature and in the laboratory.” (Bogen, 2011). I will assert claims parallel to their statements, with some additions. The recorded and collected data is a representation of regularities in phenomena, and these regularities may provide a way to identify causes and effects. Apart from this, we are in an era where we have come to realize that directly unobservable objects might be real, such as the Higgs boson and gravitational waves. Our advancements in building high-precision detectors and in understanding fundamental interactions paved the way for these scientific steps forward. I believe the discussion of observable vs. unobservable objects is no longer relevant. Therefore, I am inclined to accept that data provides strong evidence of phenomena, whether they are observable or not. Yet the obtained data should be regulated, clustered, and analyzed by mathematical and statistical tools. Bogen and Woodward ended their discussion by illuminating both the optimistic and pessimistic sides of the road from data to phenomena:

“It is overly optimistic, and biologically unrealistic, to think that our senses and instruments are so finely attuned to nature that they must be capable of registering in a relatively transparent and noiseless way all phenomena of scientific interest, without any further need for complex techniques of experimental design and data-analysis.  It is unduly pessimistic to think we cannot reliably establish the existence of entities which we cannot perceive. In order to understand what science can achieve, it is necessary to reverse the traditional, empiricist placement of trust and of doubt.” (Bogen, Woodward 1988)

Inevitably, some philosophers started another discussion, of patterns in empirical data, by questioning how to demarcate patterns that are physically significant. James W. McAllister supposed that a pattern is physically significant if it corresponds to a structure in the world, but then argued that all patterns must be regarded as physically significant (McAllister, 2010). I agree with the idea that physical patterns should differ from other patterns: physical patterns should be correlated with mathematical expressions which are interconnected with the physical laws. Against McAllister’s idea that all patterns are physically significant, I can give the example of Escher’s patterns, artistically designed by the Dutch graphic artist M.C. Escher. If we take the color codes or pixels of his paintings as empirical data, we can surely come up with a mathematical expression or pattern that reproduces his paintings, but that pattern does not correspond to a physical structure. To support my claim, I can provide another example of parameterization to express the designs Escher created. Craig S. Kaplan of the Computer Graphics Lab at the University of Waterloo approached the problem with a unique question: can we automate the discovery of grids by recognizable motifs? His team developed an algorithm that can produce Escher’s patterns (see Fig. 1, adapted from the Escherization website).

Fig. 1. Escherization problem (Kaplan, CraigS.).

If we are convinced that Escher’s patterns do not correspond to any physical structure in the world, I would like to discuss physical patterns in data that carry uncertainty. I avoid the term ‘noise’, because it is used mostly in signal processing and so carries limited connotations; besides, people outside scientific research would take ‘noise’ to mean unwanted or irrelevant measurements. ‘Error’ is frequently used in data analysis, but again I prefer to avoid misunderstandings about its physical causes. I will continue with ‘uncertainty analysis’ of the measured data. There are two types of uncertainty in data: systematic uncertainties, which stem from instrument calibration and resolution, and precision uncertainties, which are due to repeated sampling of the system (Dunn, 2010). To analyze these uncertainties, we assume they follow a random (normal/Gaussian) distribution. I want to give an example from my own area of research: the lifetime of current peaks (Fig. 2). In the figure there are three current peaks, and I want to estimate their duration (time on the x-axis) by descriptive statistics. I choose a 95% confidence level, assuming a normal distribution of the current peaks. The measured lifetimes are 12 ns, 14 ns, and 20 ns. I can easily find their mean and sample standard deviation, 15.33 ± 4.16 ns, and from these the precision uncertainty (u_p). However, this alone is not correct. In the figure, you can see step-by-step increments of the current, which are due to the 2 ns time resolution of the oscilloscope. I need to consider this systematic uncertainty (u_s) as well.

Fig. 2. The change in the plasma current in time.
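A short sketch of how the two contributions can be combined, following the standard root-sum-square treatment (e.g. Dunn, 2010). The t-value for 95% confidence with two degrees of freedom and the choice of the full 2 ns resolution as the systematic term are my assumptions for illustration, not claims from the measurement itself:

```python
import math
import statistics

lifetimes_ns = [12.0, 14.0, 20.0]   # the three measured peak lifetimes

n = len(lifetimes_ns)
mean = statistics.mean(lifetimes_ns)    # 15.33 ns
s = statistics.stdev(lifetimes_ns)      # sample standard deviation, ~4.16 ns

# Precision uncertainty: standard deviation of the mean, scaled by
# Student's t for 95% confidence with n-1 = 2 degrees of freedom.
t_95 = 4.303                            # t-table value for 2 dof, 95%
u_p = t_95 * s / math.sqrt(n)

# Systematic uncertainty from the oscilloscope's 2 ns time resolution.
# Treating the full resolution as u_s is an assumption; some texts use
# half the least count instead.
u_s = 2.0

# Combined uncertainty: root-sum-square of the two contributions.
u_total = math.sqrt(u_s**2 + u_p**2)

print(f"lifetime = {mean:.2f} +/- {u_total:.2f} ns (95% confidence)")
```

With only three samples, the precision term dominates the systematic term, which is exactly why the small-sample t-value (rather than the Gaussian 1.96) matters here.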

Up to this point, I have mostly performed mathematical manipulations to represent data in a compact way. One important assumption was the normal distribution of uncertainties, which is observed frequently in nature, for example in human height, blood pressure, etc. The other key step was to choose a confidence level. Now it is time to discuss inferential statistics. What type of information can I deduce from empirically measured data? For example, here I observed that the lifetime of the current peaks increases from the first (12 ns) to the third (20 ns). Is there a physical reason causing this? Can I conduct an analysis of variance, or test the hypothesis that if I run the current for a longer time, I will observe longer lifetimes (>20 ns)? Even though scientists largely use descriptive statistics to present their data in graphs or tables, they are, in general, aware of the causes behind the trends or patterns in empirical data. These patterns are the footprints of physical structures, in other words, phenomena.
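The hypothesis test contemplated above could be sketched with Welch's t statistic, which compares two samples without assuming equal variances. The "later run" numbers below are invented purely for illustration; only the first three lifetimes come from the measurement:

```python
import math
import statistics

early_ns = [12.0, 14.0, 20.0]   # the three measured lifetimes
later_ns = [19.0, 22.0, 24.0]   # hypothetical longer run (invented values)

def welch_t(a, b):
    """Welch's t statistic and its approximate degrees of freedom."""
    va = statistics.variance(a) / len(a)   # sample variance of the mean
    vb = statistics.variance(b) / len(b)
    t = (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    dof = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, dof

t, dof = welch_t(early_ns, later_ns)
print(f"t = {t:.2f} with ~{dof:.1f} degrees of freedom")
# Compare t against a one-sided critical value from a t-table to decide
# whether the apparent increase in lifetime is statistically significant.
```

This is the inferential step: instead of merely summarizing the data, we ask whether an observed trend survives the sampling uncertainty.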

In McAllister’s paper (2010), the term ‘noise’ is used so loosely that he assumes noise terms can be added up, although we cannot add them if the underlying parameters depend on each other. He provided an example of deviations in the length of a day (Dickey, 1995). His argument was that the noise is not randomly distributed and has a component increasing linearly per century (1-2 ms per century). All the forces affecting the length of a day are illustrated in Dickey’s paper (Fig. 3).

Fig. 3. The forces perturbing Earth’s rotation (Dickey, 1995).

Dickey described all these individual effects as the following:

“The principle of conservation of angular momentum requires that changes in the Earth’s rotation must be manifestations of (a) torques acting on the solid Earth and (b) changes in the mass distribution within the solid Earth, which alters its inertia tensor. (…) Changes in the inertia tensor of the solid Earth are caused not only by interfacial stresses and the gravitational attraction associated with astronomical objects and mass redistributions in the fluid regions of the Earth but also by processes that redistribute the material of the solid Earth, such as earthquakes, postglacial rebound, mantle convection, and movement of tectonic plates. Earth rotation provides a unique and truly global measure of natural and man-made changes in the atmosphere, oceans, and interior of the Earth. “(Dickey, 1995, pg.17)


It is admirable that a group of researchers spent considerable time upgrading their measurement tools, and the author explains how they tried to reduce the systematic uncertainties by doing so (pg. 21 in the original article). The entire paper is about hourly/monthly/yearly patterns affecting the length of a day, and about how their measurement tools (interferometers) are capable of detecting small changes. Dickey clearly states that ‘an analysis of twenty-five years of lunar laser ranging data provides a modern determination of the secular acceleration of the moon of -25.9 ± 0.5 arcsec/century² (Dickey et al., 1994a), which is in agreement with satellite laser ranging results.’ He provided the noise term for this specific measurement, and there is no reason to assume it is not randomly distributed, as McAllister claimed. Dickey captured the oscillations (patterns) in the data on time scales up to centuries and supported his empirical data with physical explanations; tidal dissipation, occurring both in the atmosphere and the oceans, is the dominant source of variability. In the end, the important point I want to make against McAllister’s account of patterns is that empirical data can yield a causal relation which can be understood as a pattern, and that the uncertainties, in the sense of inferential statistics, might provide information about patterns too.

Inferential statistics generally entails a probabilistic approach. One application for computer algorithms is Bayesian probabilistic programming, which adopts hypotheses that make probabilistic predictions, such as “this pneumonia patient has a 98% chance of complete recovery” (Patricia J. Riddle’s lecture notes). With this approach, algorithms combine probabilistic models with inferential statistics. The MIT Probabilistic Computing Project is one of the important research laboratories in this area. It offers probabilistic search in large data sets, AI assessment of data quality, virtual experiments, and AI-assisted inferential statistics: “what genetic markers, if any, predict increased risk of suicide given a PTSD diagnosis and how confident can we be in the amount of increase, given uncertainty due to statistical sampling and the large number of possible alternative explanations?” (BayesDB). The models used in BayesDB are ‘empirical’, so they are expected to interpolate physical laws in regimes of observed data, but they cannot extrapolate the relationships to new regimes where no data has been observed. The researchers showed that data obtained from satellites could be interpolated to Kepler’s law (Saad & Mansinghka, 2016).
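As a toy illustration of the Bayesian reasoning behind a prediction like "98% chance of recovery", here is a beta-binomial model; the recovery counts are invented for illustration and have nothing to do with BayesDB's actual models:

```python
import math

# Invented outcome counts for a hypothetical cohort of pneumonia patients.
recovered, not_recovered = 49, 1

# With a uniform Beta(1, 1) prior on the recovery probability, the
# posterior after observing the counts is Beta(1 + 49, 1 + 1).
alpha = 1 + recovered
beta = 1 + not_recovered

posterior_mean = alpha / (alpha + beta)
posterior_var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
posterior_std = math.sqrt(posterior_var)

print(f"recovery probability ~ {posterior_mean:.3f} +/- {posterior_std:.3f}")
```

The point of the sketch is the shape of the answer: not a plain yes/no, but a probability with its own quantified uncertainty, which is exactly the style of output probabilistic programming systems aim for.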

3. Empirical data can be interpolated to physical laws. Empirical data sets can be used as raw input to an algorithm trained to distribute the data into subsections. If this algorithm can yield relations between data sets, we might be able to obtain physical laws in the regimes of observed data. There are two main concerns here: the possibility of imposing the user’s biases through parametrization, and the user’s capability to understand the resulting output. The first obstacle has been overcome by unsupervised learning programs. The latter needs to be elaborated.

Algorithms (both recursive and iterative ones) have black boxes, which pose an accessibility problem for the mid-layers of algorithms. Emily Sullivan states that ‘modelers do not fully know how the model determines the output’ (Sullivan, 2019). She continues:

“(…) if a black box computes factorials and we know nothing about what factorials are, then our understanding is quite limited. However, if we already know what factorials are, then this highest-level black box, for factorials, turns into a simple implementation black box that is compatible with understanding. This suggests that the level of the black box, which is coupled with our background knowledge of the model and the phenomenon the model bears on (…)” (Sullivan, 2019)

She has a point in stating the relation between black boxes and our understanding; however, this relation does not override the possibility of discoveries. The existence of black boxes is not a limitation for an algorithm that may yield unknown physical relations; the limitation is our understanding. As Lynch titled his book, Knowing More and Understanding Less in the Age of Big Data, we are struggling between knowing and understanding. Algorithms, on the other hand, act on the side of ‘learning’. Although the concept of learning is worth discussing philosophically, I will consider it in the sense of machine (algorithm) learning. There are seven categories of algorithm learning: learning by direct implanting, learning from instruction, learning by analogy, learning from observation and discovery, learning through skill refinement, learning through artificial neural networks, and learning through examples (Kandel, Langholz 1992). In the case of learning from observation and discovery, the unsupervised learner reviews the prominent properties of its environment to construct rules about what it observes. GLAUBER and BACON are two examples investigated elsewhere (Langley, 1987). These programs bring in ‘non-trivial discoveries in the sense of descriptive generalizations’, and their capability to generate novelties has been questioned (Ratti, 2019). The novelties might be subtle, but they still carry the properties of a novelty, one that was hidden from human knowledge before.

I have noticed that people tend to separate qualitative laws from quantitative laws, implying that algorithms are not effective for the laws of qualitative structures because human integration is necessary to evaluate novelties (Ratti, 2019). I want to redefine these, drawing inspiration from Patricia J. Riddle’s class notes on ‘Discovering Qualitative and Quantitative Laws’: qualitative descriptions state generalizations about classes, whereas quantitative laws express a numeric relationship, typically stated as an equation, and refer to a physical regularity. I am hesitant to call qualitative descriptions laws, because I believe this confusion stems from the historical advancement of biology and chemistry. Here, I want to refer to Svante Arrhenius (Ph.D., M.D., LL.D., F.R.S., Nobel Laureate, Director of The Nobel Institute of Physical Chemistry, a pretty impressive title!) who wrote the book ‘Quantitative Laws in Biological Chemistry’ in 1915. In the first chapter, he emphasized the historical methods of chemistry:

“As long as only qualitative methods are used in a branch of science, this cannot rise to a higher stage than the descriptive one. Our knowledge is then very limited, although it may be very useful. This was the position of Chemistry in the alchemistic and phlogistic time before Dalton had introduced and Berzelius carried through the atomic theory, according to which the quantitative composition of chemical compounds might be determined, and before Lavoisier had proved the quantitative constancy of mass. It must be confessed that no real chemical science in the modern sense of the word existed before quantitative measurements were introduced. Chemistry at that time consisted of a large number of descriptions of known substances and their use in the daily life, their occurrence and their preparation in accordance with the most reliable receipts, given by the foremost masters of the hermetic (i.e. occult) art.” (Arrhenius, 1915)

The point I want to make is that our limited knowledge might require qualitative approaches, for example in biology; however, the descriptive processes can only be an important part of tagging clusters, while the mathematical expressions derived from the empirical data express the quantitative laws of nature. Furthermore, our limited knowledge and the presence of black boxes are not obstacles for unsupervised learning algorithms, which can nicely cluster and tag data sets and demonstrate relations between them. As an example, a group of researchers showed that their algorithm can ‘discover physical concepts from experimental data without being provided with additional prior knowledge’. The algorithm (SciNet) discovers the heliocentric model of the solar system; that is, it encodes the data into the angles of the two planets as seen from the Sun (Iten, Metger 2018). They explained the encoding and decoding processes by comparing human and machine learning (Fig. 4).

Fig. 4. (a) Human learning, (b) machine learning (Iten, Metger, 2018).


In human learning, we use representations (the initial position, the velocity at a point, etc.), not the original data. As mentioned in the paper, the process of producing the answer (by applying a physical model to the representation) is called decoding. The algorithm produces latent representations by compressing empirical data, utilizing a probabilistic encoder and decoder in the process. As a result, the researchers were able to recover physical variables from experimental data. SciNet learns to store the total angular momentum, a conserved quantity of the system, to predict the heliocentric angles of Earth and Mars. It is clear that an unsupervised algorithm can capture physical laws (i.e. conservation laws) from empirical data.
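The encode-compress-decode loop can be illustrated with a deliberately tiny linear autoencoder trained by gradient descent. This is a toy stand-in for SciNet's idea (compressing observations into a small latent representation and reconstructing from it), not the architecture from the paper; the data and weights are invented:

```python
# Two-dimensional observations that actually live on a line (x2 = 2*x1),
# so a single latent number should suffice to reconstruct them.
data = [(0.1 * k, 0.2 * k) for k in range(1, 11)]

w = [0.5, 0.5]   # encoder weights: latent z = w . x
v = [0.5, 0.5]   # decoder weights: reconstruction x_hat = v * z

def loss():
    """Mean squared reconstruction error over the data set."""
    total = 0.0
    for x1, x2 in data:
        z = w[0] * x1 + w[1] * x2
        e1, e2 = v[0] * z - x1, v[1] * z - x2
        total += e1 * e1 + e2 * e2
    return total / len(data)

lr = 0.1
initial = loss()
for _ in range(5000):
    gw = [0.0, 0.0]   # gradient w.r.t. encoder weights
    gv = [0.0, 0.0]   # gradient w.r.t. decoder weights
    for x1, x2 in data:
        z = w[0] * x1 + w[1] * x2
        e1, e2 = v[0] * z - x1, v[1] * z - x2
        gv[0] += 2 * e1 * z / len(data)
        gv[1] += 2 * e2 * z / len(data)
        common = 2 * (e1 * v[0] + e2 * v[1]) / len(data)
        gw[0] += common * x1
        gw[1] += common * x2
    w = [w[0] - lr * gw[0], w[1] - lr * gw[1]]
    v = [v[0] - lr * gv[0], v[1] - lr * gv[1]]

print(f"reconstruction MSE: before={initial:.4f}, after={loss():.6f}")
```

The learned latent variable plays the role of the "physically meaningful" compressed quantity: because the data is one-dimensional in disguise, a single number recovered by the encoder is enough to reproduce both coordinates.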

4. An algorithm can reveal unknown physical phenomena by analyzing patterns. Now I want to explore the territory of the unknown. Until now, I have tried to show that physical structures have patterns which can be represented by mathematical expressions accompanied by inferential statistics. The expressions are correlated with the quantitative laws which govern the relations and interactions of phenomena. The observed empirical data includes a set of patterns and uncertainties. An unsupervised algorithm can take the empirical data and perform descriptive tagging by itself. The layered process might be inaccessible to the user, but the black boxes do not prevent the discovery of physical laws. Therefore, an algorithm can interpolate the data in the observed region to physical laws. This is a learning process for an algorithm, although it is a little different from human learning.

As a curious human being, I wonder whether algorithms could generate new physical structures that are unknown to today’s people. The first question is: why are those structures unknown? I can provide two primary reasons: either our instruments require improvements to catch small changes buried in the signal, or our understanding is too limited to attach meaning to the observations. Gravitational waves were hypothetical entities until we upgraded our interferometers to capture the waves emitted by the collision of two massive black holes. Tackling the limitations of our understanding is more challenging; however, history embodies many paradigm shifts, such as statistical thermodynamics, general relativity, and wave-particle duality. If you went back to the 15th century and told the scientists of the time that the cause of static electricity (electrons) follows the uncertainty principle, so that you cannot observe it when you measure it, they would laugh at you. Therefore, I never eliminate the chance of finding unknowns in science. This unknown can be a new particle, force, interaction, or process. Whatever the unknown is, I will assume it is a part of physical phenomena.

The second important question is: can we trust algorithms to find unknowns? First of all, trained algorithms, supervised or unsupervised, have the capability to learn, as I described in the previous section. To the best of my knowledge, such algorithms have proven themselves by obtaining Kepler’s laws, conservation laws, and Newton’s laws (Saad & Mansinghka, 2016; Iten, Metger, 2018). That is to say, these algorithms are discovering the natural laws which were unknown to 15th-century humanity. I maintain that algorithms might provide the next generation of physical laws or related phenomena, and I do not see any obstacle preventing them from discovering something new.

The third question would be: are those unknowns epistemically accessible to human understanding? Although this question is somewhat related to the first one, I want to explicitly state two kinds of unknowns: known unknowns and unknown unknowns. The first would be claimed to exist but never detected before, such as gravitons, the carriers of the gravitational force. The second, however, would never have been predicted to exist, such as a fifth force of nature. The most prominent challenge in detecting a graviton is the impossible design of the detector, which would have to weigh as much as Jupiter (Rothman & Boughn, 2006). The second problem is nature itself; to detect a single graviton, we would need to shield against 10³³ neutrino events, and we do not have a monochromatic graviton source. That is why the graviton is accepted as a hypothetical particle. However, we cannot rule out its existence either. The latest LIGO interferometers collected data proving the existence of gravitational waves, and some think these data can be used to predict the properties of gravitons (Dyson, 2014). Why don’t we feed our algorithms with these data? Maybe they can reveal the relations deep in the data, and eventually we can complete the puzzle of the standard model. The overarching goal of this example is to emphasize the possibility of discovering unknowns. I accept that empirical data might not guarantee a solid result (i.e. the existence of gravitons); however, an algorithm can point out the possibilities buried in the data in a probabilistic way, such as the probable mass of the graviton within certain limits.

Alternatively, algorithms might reveal unknown unknowns. Before discussing this, I want to examine one of the latest findings in experimental physics: the fifth force. Physicists claim they have found a new force of nature (Krasznahorkay, 2019). The excited (energized) new particle decayed and emitted light, which was detected by a double-sided silicon strip detector (DSSD) array. The mass of the new force-carrier particle is predicted by the conservation of energy. “Based on the law of conservation of energy, as the energy of the light producing the two particles increases, the angle between them should decrease. Statistically speaking, at least. (…) But if this strange boson isn’t just an illusion caused by some experimental blip, the fact it interacts with neutrons hints at a force that acts nothing like the traditional four.” (Mcrae, 2019).

The existence of a fifth force was totally unknown until last month. Although it requires further evidence from other labs, no one has been able to find a misinterpretation of the data, as far as I know. This does not mean that the finding is undeniable, but there is an anomaly regarding the law of conservation of energy. Previously, I showed some example algorithms yielding the conservation laws. My claim is that if an algorithm is able to capture the conservation laws, then it might reveal the relations of a new phenomenon, in this case, a new force! A force requires a carrier particle, which they termed the X17 particle, with mass m_Xc² = 16.70 ± 0.35 (stat) ± 0.5 (sys) MeV, where the statistical and systematic uncertainties are stated clearly (Krasznahorkay, 2019). My far-fetched point would be a humble suggestion for researchers working on this type of project: let the detector data be analyzed by algorithms to see whether they can reveal a new relation.

5. Conclusion. The motivation for writing this article was to ignite a spark about the future possibilities of unsupervised learning algorithms. Although the capabilities of pattern recognition and of generating novelties are questionable, due to the qualitative aspects of phenomena and the human factor involved in detecting new discoveries and understanding what an algorithm provides, I am very optimistic about probabilistic programming combined with inferential statistics. Here, I showed that empirical data has patterns distinguishable from other patterns (for example, artistic ones). A physical pattern can be represented by mathematical relations, which are the traces of physical phenomena. There are many algorithms capturing physical relations based on the patterns in empirical data; they can also recover the physical laws of nature. Given all these points, I claim we will be able to discover new physical phenomena in the real world through algorithms. Apart from inadequate instruments and our limited understanding, I do not think there is a hurdle embedded inherently in a running algorithm. The unknowns are unknown to human beings, not to computer algorithms.



Laudan L. (1980). Why was the Logic of Discovery Abandoned?. In: Nickles T. (eds) Scientific Discovery, Logic, and Rationality. Boston Studies in the Philosophy of Science, vol 56. Springer, Dordrecht.

Van Fraassen, B. (1980). Arguments concerning scientific realism (pp. 1064-1087).

Yang, X. I. A., Zafar, S., Wang, J. X., & Xiao, H. (2019). Predictive large-eddy-simulation wall modeling via physics-informed neural networks. Physical Review Fluids, 4(3), 034602.

Woodward, J. (2010). Data, phenomena, signal, and noise. Philosophy of Science, 77(5), 792-803.

McAllister, J. W. (2010). The ontology of patterns in empirical data. Philosophy of Science, 77(5), 804-814.

Bogen, J., & Woodward, J. (1988). Saving the phenomena. The Philosophical Review, 97(3), 303-352.

Bogen, J. (2011). ‘Saving the phenomena’ and saving the phenomena. Synthese, 182(1), 7-22.

Kaplan, Craig S. (2000). Escherization

Dunn, P. (2010). Measurement and data analysis for engineering and science (2nd ed.). Boca Raton, FL: CRC Press/Taylor & Francis.

Dickey, J. O. (1995). Earth rotation variations from hours to centuries. Highlights of Astronomy, 10, 17-44.

Mansinghka, Vikash K. (2019). BayesDB.

Sullivan, E. (2019). Understanding from machine learning models. British Journal for the Philosophy of Science.

Lynch, M. P. (2016). The Internet of us: Knowing more and understanding less in the age of big data. WW Norton & Company.

Kandel, A., & Langholz, G. (1992). Hybrid architectures for intelligent systems. CRC press.

Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. MIT press.

Ratti, E. (2019). What kind of novelties can machine learning possibly generate? The case of genomics. (unpublished manuscript).

Riddle, Patricia J. (2017). Discovering Qualitative and Quantitative Laws.

Arrhenius, S. (1915). Quantitative laws in biological chemistry (Vol. 1915). G. Bell.

Iten, R., Metger, T., Wilming, H., Del Rio, L., & Renner, R. (2018). Discovering physical concepts with neural networks. arXiv preprint arXiv:1807.10300.

Saad, F., & Mansinghka, V. (2016). Probabilistic data analysis with probabilistic programming. arXiv preprint arXiv:1608.05347.

Rothman, T., & Boughn, S. (2006). Can gravitons be detected?. Foundations of Physics, 36(12), 1801-1825.

Dyson, F. (2014). Is a graviton detectable?. In XVIIth International Congress on Mathematical Physics (pp. 670-682).

Krasznahorkay, A. J., Csatlos, M., Csige, L., Gulyas, J., Koszta, M., Szihalmi, B., … & Krasznahorkay, A. (2019). New evidence supporting the existence of the hypothetic X17 particle. arXiv preprint arXiv:1910.10459.

McRae, M. (2019). Physicists claim they’ve found even more evidence of a new force of nature. ScienceAlert.

Dec 26

Values in Science

In Defence of the Value-Free Ideal, Gregor Betz

  • the most important distinction:

“The methodological critique is not only ill-founded, but distracts from the crucial methodological challenge scientific policy advice faces today, namely the appropriate description and communication of knowledge gaps and uncertainty.”

Acknowledging the miscommunication between scientists and policy makers is an important step toward informing the public on crucial issues.

  • a clarification question/criticism:

“Scientists are expected to answer these questions with “plain” hypotheses: Yes, dioxins are carcinogenic; or: no, they aren’t. The safety threshold lies at level X; or: it lies at level Y.”

I think those are not even hypotheses. A hypothesis should clarify a problem or challenge under specified conditions. For example, ‘If this …, then that …’ would be a candidate hypothesis, but ‘plain’ statements are not a part of scientific communication. Throughout the paper, I feel the author placed more emphasis on effective communication than on value judgments.

Inductive Risk and Values in Science, Heather Douglas

  • the most important distinction:

“as most funding goes toward “applied” research, and funding that goes toward “basic” research increasingly needs to justify itself in terms of some eventual applicability or use.” pg.577.

I think this applicability criterion brings scientists, policy makers, and citizens together, although all three groups may try to impose different values on science.

  • a clarification question/criticism:

“In cases where the consequences of making a choice and being wrong are clear, the inductive risk of the choice should be considered by the scientists making the choice. (…) The externality model is overthrown by a normative requirement for the consideration of non-epistemic values, i.e., non-epistemic values are required for good reasoning.” pg.565.

What if we don’t know the consequences? What are the cases in which being wrong is so clear? I guess I don’t understand why the externality model is overthrown either.

Coming to Terms with the Values of Science: Insights from Feminist Science Studies Scholarship, Alison Wylie, Lynn Hankinson Nelson

  • the most important distinction:

“We have so far emphasized ways in which feminist critics reveal underlying epistemic judgments that privilege simplicity and generality of scope (in the sense of cross‐species applicability) over empirical adequacy, explanatory power, and generality of scope in another sense.(…) the kinds of oversimplification that animate the interest in reductive and determinist accounts of sex difference-played a role in their evaluative judgment that the costs of the tradeoffs among epistemic values characteristic of the standard account were unacceptable.” pg. 16.

I really appreciate this distinction emphasizing ‘simplicity over empirical adequacy’. Apparently, short-cuts through the best explanation have clogged the way to ‘true science’. Contextual values can advance science not only by including women, but by considering all under-represented groups in a way that their inclusion makes a difference. I heard an example from a documentary correlating ancient beliefs with natural events: Aztec mythology apparently recorded a meteorite fall, and the story was passed down to today’s people. Before talking to the indigenous people, the huge crater left by the meteorite could not be explained. Alternative history and alternative science are holding hands nowadays because of the efforts of feminists and under-represented groups (which raises the highly debatable question of why there is an ‘over-represented group’, such as white American male domination in positions of power, even in conference speeches, keynotes, etc.).

  • a clarification question/criticism:

“The prospects for enhancing the objectivity of scientific knowledge are most likely to be improved not by suppressing contextual values but by subjecting them to systematic scrutiny; this is a matter of making the principle of epistemic provisionality active.” pg. 18.

I think this is a bit vague. How can we make the principle of epistemic provisionality active? Whose role is this?

Values and Objectivity in Scientific Inquiry, Helen E. Longino

  • the most important distinction:

“This constitution is a function of decision, choice, values and discovery. (…) thus contextual values are transformed into constitutive values.” pg.100.

Causality in time brings the relations between phenomena. The conceptualized phenomenon is subjected to the historical development of an idea. Therefore, the constitution of the object of the study is affected by the contextual values of its time.

  • a clarification question/criticism:

“Where we do not know enough about a material or a phenomenon (…) the determination by social or moral concerns (…) little to do with the factual adequacy of those procedures.” pg.92.

Here, I can suggest a minor engineering insight. I admit that it cannot be applied to all cases, but I think it is helpful in many. When we design something (it can be a little screw or an aircraft wing), we always consider the worst-case scenarios. If we calculate the maximum stress a part can handle, we apply a safety factor of 5, for example: we design the part to withstand 100 kPa where the maximum expected pressure would be 20 kPa. We propose a product lifetime much longer than it will be needed. Accordingly, we report all the possibilities that could be encountered and under which conditions. Therefore, even if we don’t know the exact procedure, we can apply safety factors and make them standard. I know, for example, that aircraft turbine blades are designed to six-sigma failure probabilities, on the order of a few failures per million! However, we also know the Boeing 737 MAX skipped some safety checks in its sensors, resulting in two deadly crashes!
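The worst-case reasoning above can be sketched in a few lines of code (a toy illustration; the 20 kPa load, the safety factor of 5, and the kPa units are just the numbers from my example, not a real design standard):

```python
def design_load(max_expected_load, safety_factor=5.0):
    """Return the load a part must be designed to withstand.

    The part is sized not for the maximum expected load but for that
    load multiplied by a safety factor, so that unknown or unmodeled
    conditions are absorbed by the margin.
    """
    return max_expected_load * safety_factor

# Maximum expected pressure on the part: 20 kPa, safety factor 5
# -> design the part to withstand 100 kPa.
print(design_load(20.0))  # 100.0
```

The same pattern, standardized across an industry, is how engineers hedge against what they don’t know about a material or a phenomenon.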



Dec 26

Scientific Reductionism

Issues in the Logic of Reductive Explanations, Ernest Nagel


  • the most important distinction:

“Bridge laws state what relations presumably obtain between the extensions of their terms, so that in favorable cases laws of the “narrower” theory (with suitable qualifications about their approximate character) can be deduced from the “wider” theory, and thereby make intelligible why the two theories may have a common field of application.” pg.920.

I do believe reductionism is a natural process when we are dealing with a specific problem. We cannot start solving the problem of high carbon emissions from combustion products by considering the fundamental particles and the formation of the universe, which are candidates for the widest theories. But we can specifically build a theory of molecular interactions covering organic compounds and their burning rates with oxygen.

  • a clarification question/criticism:

“the meaning of every term occurring in a theory or in its observation statements is wholly and uniquely determined by that theory, so that its meaning is radically changed when the theory is modified. (…) it does not follow that there can be no term in a theory which retains its meaning when it is transplanted into some other theory.” pg.919.

I think Feyerabend has a point in emphasizing changes in meaning. I would like to recall Hilary Putnam’s referential model of meaning, which comprises four components, and it is highly possible to lose or change one of them while theorizing a phenomenon.

1953 and all That. A Tale of Two Sciences, Philip Kitcher

  • the most important distinction:

“It will not simply consist in a chemical derivation adapted with the help of a few boundary conditions furnished by biology. Instead, we shall encounter a sequence of subarguments: molecular descriptions lead to specifications of cellular properties, from these specifications we draw conclusions about cellular interactions, and from these conclusions we arrive at further molecular descriptions. There is clearly a pattern of reasoning here which involves molecular biology and which extends the explanations furnished by classical genetics by showing how phenotypes depend upon genotypes” pg.367.

This kind of pattern of reasoning can pave the way for bridge theories, too.

  • a clarification question/criticism:

“Later theories can be said to provide conceptual refinements of earlier theories when the later theory yields a specification of entities that belong to the extensions of predicates in the language of the earlier theory, with the result that the ways in which the referents of these predicates are fixed are altered in accordance with the new specifications.” pg.363.

Conceptual refinements… Is there a clear distinction between reductions and refinements? I don’t know.

Nov 23

Fifth force?! Wow!

It is happening, and it is happening now. A group of Hungarian researchers has shown evidence of a possible fifth force in the universe! If their results can be reproduced by others, we know exactly who will get the next Nobel prize!

This is the website I first read all the details: Physicists Claim They’ve Found Even More Evidence of a New Force of Nature

This is a fancy look: A ‘no-brainer Nobel Prize’: Hungarian scientists may have found a fifth force of nature



Nov 23

Laws of Nature

Pragmatic Laws, Sandra D. Mitchell

  • the most important distinction:

“The function of scientific generalizations is to provide expectations of the occurrence of events and patterns of properties. (…) To know when to rely on a generalization to know when it will apply, and this can be decided only under what specific conditions it has applied before.” pg.477.

The patterns and the conditions are two distinct notions here. For example, if you’re working with charged particles, you should be aware of their response to electric and magnetic fields before considering the gravitational force, which is Cartwright’s obsession. Fields provide patterns, but under which conditions? How strong is the magnetic field, for example? Those are the questions you should ask if you want to generalize a rule.

  • a clarification question/criticism:

“The conditions upon which physical laws are contingent may be more stable through space and time than the contingent relations described in biological laws.” pg.478.

I could not see the primary difference between physical and biological laws. If they have different traits, then we should question the properties and the definition of a law in the first place. But this might be the normative approach.

Do the Laws of Physics State the Facts? Nancy Cartwright

  • the most important distinction:

“I have argued, a trade‐off between factual content and explanatory power. We explain certain complex phenomena as the result of the interplay of simple, causal laws.” pg.15.

I think she is trying to show that simplifications and generalizations result in a loss of facticity. It is worth discussing.

  • a clarification question/criticism:

“But this law of causal action is highly specific to the situation and will not work for combining arbitrary influences studied by transport theory.” pg. 9.

Fick’s law is a specific case of the transport equation, applicable when viscosity, body forces, and bulk velocity are comparatively insignificant. She takes a specific case and tries to generalize a rule, but the process should be the opposite: the general law is the conservation law for the transported quantity, represented by the transport equation, and Fick’s law is a sub-case of it. Also, on pg. 6, Coulomb’s Law is stated wrong! I just wanted to congratulate the editors of this article (if any)!
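As a sketch of that reduction (using the standard advection-diffusion form from transport theory, not an equation taken from Cartwright’s text): the general transport equation for a concentration c is

```latex
% General transport (advection-diffusion) equation for a concentration c,
% with bulk velocity u, diffusivity D, and source term S:
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
  = \nabla \cdot (D \nabla c) + S

% When the bulk velocity u and the source S are negligible, the
% diffusive flux reduces to Fick's first law:
\mathbf{J} = -D \nabla c
```

So the specific law falls out of the general conservation statement once the other influences are switched off, not the other way around.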

Nov 23

Scientific Understanding

The Epistemic Value of Understanding, Henk W. de Regt

  • the most important distinction:

“Scientists may prefer theories with particular pragmatic virtues because they possess the skills to construct models for explaining phenomena on the basis of these theories. In other words, they have pragmatic understanding UT of such theories. I suggest to rephrase this with the help of the notion of intelligibility. If scientists understand a theory, the theory is intelligible to them.” pg.593.

When I first thought of pragmatic virtues, I assumed they related to the application of theories, such that these virtues mark the distinction between being a scientist and being an engineer. However, as I read on, I realized the author is actually describing a way of theory choice based on the pragmatic understanding of theories.

  • a clarification question/criticism:

“The fact that deductive reasoning—and accordingly deductive-nomological explanation—involves skill and judgment has two important implications. First, skills cannot be acquired from textbooks but only in practice, because they cannot be translated in explicit sets of rules. Accordingly, to possess a skill is to have ‘tacit knowledge’.” pg.589.

I somewhat agree with this statement, but the explanations that follow do not really support the idea. There are examples of implicit learning, unconscious and unintentional learning, internalizing rules, and developing cognitive skills like physical skills. However, the author then writes that the existence of such a mind is also problematic. So, I’m not clear on how skills can be developed without explicit sets of rules.

Idealization and the Aims of Science, Potochnik

  • the most important distinction:

“Understanding is at once a cognitive state and an epistemic achievement. Because understanding is a cognitive state, it depends in part on the psychological characteristics of those who seek to understand.” pg.94.

The important distinction here is the phrase ‘who seek to understand’, which emphasizes a certain level of human cognition. This is also what makes a person a scientist.

  • a clarification question/criticism:

“But these idealizations are specific to their purposes. This requires focus on one particular scientific aim (at a time), and one particular deployment of that aim, to the exclusion of others.” pg.108.

I believe idealizations can be made to reach a general rule or law, excluding specific cases. For example, the ideal gas law we learned in high school is general enough to apply across a wide range of conditions, from near vacuum up to moderate pressures. If we want to be specific, we may use a modified version of the equation, including relative humidity, intermolecular forces, etc. Therefore, idealizations can aim at a broader view of the topic instead of being so specific.
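As an illustration of how the idealization trades accuracy for generality (the van der Waals constants below are the standard values for CO2; the particular state point of 1 mol in 1 liter at 300 K is my own choice):

```python
R = 8.314  # J/(mol*K), universal gas constant

def pressure_ideal(n, T, V):
    """Ideal gas law: p = nRT / V (one simple rule, any gas)."""
    return n * R * T / V

def pressure_vdw(n, T, V, a, b):
    """Van der Waals equation: a gas-specific refinement that corrects
    for molecular volume (b) and intermolecular attraction (a)."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

# 1 mol of CO2 at 300 K in 1 liter (0.001 m^3)
a_co2, b_co2 = 0.3640, 4.267e-5  # standard van der Waals constants for CO2
p_i = pressure_ideal(1, 300, 1e-3)
p_v = pressure_vdw(1, 300, 1e-3, a_co2, b_co2)
print(p_i, p_v)  # the idealized value overestimates the corrected one here
```

The idealized law needs no gas-specific constants at all, which is exactly what makes it general; the refinement buys accuracy at the cost of that generality.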

Nov 23

Scientific Explanations

Design explanation: determining the constraints on what can be alive, Arno G. Wouters

  • the most important distinction:

“Yet, unlike accidental generalities and like causal relations, functional dependencies are in a sense physically necessary: an organism that has the dependent trait cannot be alive (or will be less viable) if it has the alternative trait instead of the needed one. In other words, a functional dependency is a constraint on what can be alive.” pg.75.

I think it is important from the philosophers’ perspective that there are boundary conditions and initial conditions for every physical system, depending on its scale in space (dimensions in nm, um, etc.) and its presence in time (e.g. how much time a spaceship needs to decelerate). Here, the author makes an important attempt to state a constraint on being alive by elucidating functional dependencies.

  • a clarification question/criticism:

“They (functional dependence relations) are synchronic in the sense that the need must be satisfied at the time that the demand arises.” pg.75.

I understood this statement as saying the demand (more oxygen in the blood system) and the supply (lungs) should be available at the same time. However, evolution is nothing like lightning; it is a process, and it cannot be understood if we don’t consider the changes happening in the environmental system. For example, fish used gills to take up the oxygen dissolved in water. But before then, there was not enough oxygen in the atmosphere for a very long time. First, oxygen was released from the oxidized rocks, then oxygen species dissolved in the oceans. During this time, there were only little plankton and bacteria. As the oxygen level in the oceans increased, evolution progressed, eventually producing fish with gills. My point is that the author never mentions the effects of the environment itself directing evolution and causing natural selection. The demand arises not only from the creature’s needs, but also from changes in the surroundings, and this happens over a long time period.

Why Ask, “Why?”? An Inquiry Concerning Scientific Explanation, Wesley C. Salmon

  • the most important distinction:

“Developments in twentieth‐century science should prepare us for the eventuality that some of our scientific explanations will have to be statistical—not merely because our knowledge is incomplete (as Laplace would have maintained), but rather because nature itself is inherently statistical.” pg.6.

I just wanted to say ‘thank you!’ for this beautiful sentence. Actually, this sentence also beats the deterministic view of Laplace and others. For example, statistical thermodynamics, the first step before diving into the particle’s world and quantum mechanics, was pioneered by Boltzmann, who was largely alone in defending his ideas and eventually committed suicide. I still have trouble understanding why accepting a statistical world required so much effort of the human mind.

  • a clarification question/criticism:

“The transmission of light from one place to another, and the motion of a material particle, are obvious examples of causal processes. The collision of two billiard balls, and the emission or absorption of a photon, are standard examples of causal interactions.” pg.8.

Transmission, emission, and absorption are three modes of radiation acting on a surface, so they are basically light-electron interactions, although some say transmission is just a passing wave. I think singling out transmission as a causal process is wrong in the sense of radiation. I would not distinguish causal process from causal interaction, as Hume suggested, but I agree with the idea that cause and effect are more analogous to continuous processes, which brings interactions into play.

Nov 23

Models and Representation

Models and Representation, I. G. Hughes

  • the most important distinction:

“The requirement of empirical adequacy is thus the requirement that interpretation is the inverse of denotation.” pg. 333.

Overall, I’m impressed by this simple sentence because I can correlate it with Van Fraassen’s constructive empiricism, which asserts the acceptance of a theory with the belief that it is empirically adequate. In my view, interpretation (in the sense of this reading) is the mapping of a broader theory, which was postulated in the earlier stages of research by denotations. The demonstration step is related to the mathematical or material model, providing empirical adequacy as a bridge between denotation and interpretation.

  • a clarification question/criticism:

“Galileo’s strategy is to take a problem in physics and represent it geometrically. The solution to the problem is then read off from the geometrical representation. In brief, he reaches his answer by changing the question; a problem in kinematics becomes a problem in geometry. This simple example suggests a very general account of theoretical representation. I call it the DDI account.” pg. 327.

I don’t think this example is a good starting point to introduce a new account, because it mostly focuses on the representation/denotation step. For a demonstration, he could mention the mathematical expression of the motion (x = 1/2*a*t^2) or show the result of an experiment (e.g. a car moving from city A to city B).
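The kinematic expression mentioned above is the textbook result of integrating a constant acceleration (assuming the body starts from rest at the origin):

```latex
% Constant acceleration a, starting from rest (v_0 = 0, x_0 = 0):
v(t) = \int_0^t a \, \mathrm{d}\tau = a t
x(t) = \int_0^t v(\tau) \, \mathrm{d}\tau = \tfrac{1}{2} a t^2
```

This is the kind of demonstration step the DDI account would need alongside the geometrical representation.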

Models and Fiction, Roman Frigg

  • the most important distinction:

“What is missing in the structuralist conception is an analysis of the physical character of model systems. (…) If the Newtonian model system of sun and earth were real, it would consist of two spherical bodies with mass and other concrete properties such as hardness and color, properties that structures do not have; (…) “ pg.253.

I appreciate his distinction between model systems as real or as hypothetical entities. It sounds a bit cheesy that the Newtonian model system should consist of real spherical bodies with hardness and color (?!). If the model system is designed to show the gravitational force between two celestial objects, why would we care about their hardness or color? The model describes the force, not the solid objects.

  • a clarification question/criticism:

“Hence, the essential difference between a fictional and non-fictional text lies in what we are supposed to do with it: a text of fiction invites us to imagine certain things while a report of fact leads us to believe what it says.” pg. 260.

So, what if we read a text from an unknown author on weather predictions for the next 20 years? Let’s say he/she writes about expected climate changes in the South Bend area, saying, ‘Lake Michigan will evaporate quickly and will trigger tornadoes almost every week during spring and summer.’ How can you decide whether this is part of a horror novel or a scientific fact? I think there should be more distinctive features in scientific texts, such as reliability, testability, fallibility, the power of their predictions, applicability, etc. (Thanks, Sir Karl, again!)

Oct 14

History and Philosophy of Science

– What is, according to Chang, the complementary function of history and philosophy of science?

  • Philosophy and history together provide organized skepticism and criticism toward the undiscovered and forgotten parts of science. He argued that eliminated scientific theories can be brought back to scientists’ attention. Although he supported his ideas with acceptable examples, such as cold radiation (about which I have zero intuition), I tend to believe more in the survival of the best explanations, as discussed in Van Fraassen. I couldn’t see the pragmatic value of reconsidering old, forgotten, somehow falsified, and yet weakly standing theories.
  • I believe the survival of the best explanation and the rediscovery of an old theory are not contradictory. Explanations survive if they meet the needs of their time and pave the way for new predictions or discoveries, until they are falsified by new tests. In contemporary science, we can still discard the surviving theories based on new findings, brought by new theories or by tools enabled by recent technologies. Here, we can go back to an old theory that satisfies the conditions we’re dealing with. However, this old theory now holds because of new technologies and advanced techniques (i.e. computational models). That old theory then transforms into the best surviving explanation. In the case of the heliocentric theory, people took it into consideration again because of the enhancement in observations, simply telescopes. Therefore, I don’t see the necessity of going back in history to find a theory on dusty shelves if we already have a valid and working model/theory.


– What is a problematic in Pitt’s article?

  • A problematic is a set of intellectual concerns that scientists are required to explore. Problematics are found in a context of relations, but they can evolve and change in time, creating their own history. That’s why we should admit that they can be considered in multiple contexts, and we must avoid cherry-picking the context that supports our ideas.

Sep 23

Underdetermination in Science

– Do you agree with Duhem that there cannot be crucial experiments? Can you make an example?

  • I think there might be crucial experiments, but they do not guarantee the full description or explanation of a phenomenon. I want to give an example from my field of research (plasma catalysis). Catalytic reactions have been studied for years to increase the production rate of valuable products. Once we combined those reactions with plasma applications, we observed an enhanced conversion of the input gas flow (for example, in the production of ammonia from nitrogen and hydrogen). Some thought the plasma was affecting the catalytic materials, while others believed the material was changing the plasma properties. Our group showed in a first experiment that there is no change in the macroscopic properties of the plasma. Then we designed an experiment with some expensive tools to see if there is any change in the catalytic material due to the plasma. So, the second experiment seems crucial to us for ending the discussion of what is responsible for the enhancement. However, as I stated before, we didn’t probe the plasma at the atomic level; it was a lumped study, which may mask possible changes in the plasma. In Duhem’s sense, there are no crucial experiments, but I believe an experiment might still be crucial if its result provides an answer that strengthens a possible explanation.


– In your opinion, what are the consequences for the rationality of science if we accept that theory choice does not work as an algorithm but it is influenced by values?

  • As Kuhn suggested, the choices scientists make are affected by objective, shared criteria as well as subjective factors. The algorithm is mostly the objective part of the process, because it returns a result based on predefined rules. However, this algorithm also requires some input to start with, which makes ‘the scientific algorithm’ shaky. The input information may come from previous theories or experiments, which are possibly influenced by tradition or even ‘the spirit of the time’, if we think of the geocentric theory as an example. On the other hand, science is still reliable and rational due to its testability and fallibility (yes, I love Popper). The geocentric theory was applicable to how stones fall, how water pumps function, etc., but it required a stationary Earth. Many astronomers questioned this: “OK, we observe other planets moving around, so why is our lovely Earth stationary?” They were not able to understand how gravity keeps the planets in their orbits and prevents people and the atmosphere from flying away, until someone called Newton proved it mathematically by inventing differential calculus. Is this theory testable, even though he just showed mystical mathematical expressions? Absolutely yes! Look around! You can easily see the ocean moving back and forth at the coasts due to the tidal force between Earth and Moon, exactly as Newton describes. So it is highly possible that theory choice is a combination of values and criteria, but once we accept that point of view, we are also framing an unshakable stage for science itself, and questioning its rationality is out of the game, I guess.
