This article investigates whether an algorithm can discover an as-yet-unknown physical phenomenon by detecting patterns in the region where the data were collected. Pattern recognition is considered on the basis of inferential statistics, which differs from descriptive statistics, as McAllister implied. I assert that physical patterns should be correlated with mathematical expressions, which are in turn interconnected with the physical (quantitative) laws. The known unknowns, e.g. gravitons, and the unknown unknowns, e.g. a fifth force of the universe, are examined with respect to the learning capabilities of an algorithm trained on empirical data. I claim there is no obstacle preventing algorithms from discovering new phenomena.
Author: Nazli Turan
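To make the abstract's claim more tangible, here is a minimal sketch of my own (not part of the article) of how an algorithm might recover a quantitative law from empirical data by detecting a pattern; the data set, the power-law model, and the fitting choice are all assumptions made purely for illustration.

# Minimal sketch: recovering a quantitative law (Kepler's third law) from data.
# Everything here is an illustrative assumption, not the article's method.
import numpy as np

# Semi-major axis a (AU) and orbital period T (years) for six planets.
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# A power law T = k * a^p is linear in log-log space: log T = p*log a + log k,
# so a simple least-squares fit recovers the exponent p.
p, log_k = np.polyfit(np.log(a), np.log(T), 1)

print(f"fitted exponent p = {p:.3f}")               # approx. 1.5, i.e. T^2 ~ a^3
print(f"fitted prefactor k = {np.exp(log_k):.3f}")  # approx. 1.0 in these units

The point of the sketch is only that a pattern in the data (the log-log linearity) can be mapped onto a mathematical expression, which is the kind of correlation between patterns and quantitative laws referred to above.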
Values in Science
In defense of the value free ideal, Gregor Betz
- the most important distinction:
“The methodological critique is not only ill-founded, but distracts from the crucial methodological challenge scientific policy advice faces today, namely the appropriate description and communication of knowledge gaps and uncertainty.”
Scientific Reductionism
Issues in the Logic of Reductive Explanations, Ernest Nagel
- the most important distinction:
“Bridge laws state what relations presumably obtain between the extensions of their terms, so that in favorable cases laws of the “narrower” theory (with suitable qualifications about their approximate character) can be deduced from the “wider” theory, and thereby make intelligible why the two theories may have a common field of application.” pg.920.
I do believe reductionism is a natural process when we are dealing with a specific problem. We cannot start solving the problem of high carbon emissions from combustion by considering the fundamental particles and the formation of the universe, which are the candidates for the widest theories. But we can develop a specific theory of molecular interactions, covering organic compounds and their burning rates with oxygen.
- a clarification question/criticism:
“the meaning of every term occurring in a theory or in its observation statements is wholly and uniquely determined by that theory, so that its meaning is radically changed when the theory is modified. (…) it does not follow that there can be no term in a theory which retains its meaning when it is transplanted into some other theory.” pg.919.
I think Feyerabend has a point in emphasizing changes in meaning. I would like to recall Hilary Putnam’s referential model of meaning, which analyzes meaning into four components. It is highly possible to lose or change one of these components while theorizing about a phenomenon.
1953 and all That. A Tale of Two Sciences, Philip Kitcher
- the most important distinction:
“It will not simply consist in a chemical derivation adapted with the help of a few boundary conditions furnished by biology. Instead, we shall encounter a sequence of subarguments: molecular descriptions lead to specifications of cellular properties, from these specifications we draw conclusions about cellular interactions, and from these conclusions we arrive at further molecular descriptions. There is clearly a pattern of reasoning here which involves molecular biology and which extends the explanations furnished by classical genetics by showing how phenotypes depend upon genotypes” pg.367.
This kind of pattern of reasoning can pave the way for bridge theories, too.
- a clarification question/criticism:
“Later theories can be said to provide conceptual refinements of earlier theories when the later theory yields a specification of entities that belong to the extensions of predicates in the language of the earlier theory, with the result that the ways in which the referents of these predicates are fixed are altered in accordance with the new specifications.” pg.363.
Conceptual refinements… Is there a clear distinction between reductions and refinements? I don’t know.
Fifth force?! Wow!
It is happening, and it is happening now. A group of Hungarian researchers has shown evidence of a possible fifth force in the universe! If their results can be reproduced by others, we know exactly who will get the next Nobel prize!
This is the website where I first read all the details: Physicists Claim They’ve Found Even More Evidence of a New Force of Nature
And here is a fancier take: A ‘no-brainer Nobel Prize’: Hungarian scientists may have found a fifth force of nature
Laws of Nature
Pragmatic Laws, Sandra D. Mitchell
- the most important distinction:
“The function of scientific generalizations is to provide expectations of the occurrence of events and patterns of properties. (…) To know when to rely on a generalization is to know when it will apply, and this can be decided only by knowing under what specific conditions it has applied before.” pg.477.
Scientific Understanding
The Epistemic Value of Understanding, Henk W. de Regt
- the most important distinction:
“Scientists may prefer theories with particular pragmatic virtues because they possess the skills to construct models for explaining phenomena on the basis of these theories. In other words, they have pragmatic understanding UT of such theories. I suggest to rephrase this with the help of the notion of intelligibility. If scientists understand a theory, the theory is intelligible to them.” pg.593.
When I first thought of the pragmatic virtues, I assumed they were related to the application of theories, such that these virtues mark the distinction between being a scientist and being an engineer. However, as I read on, I realized the author is actually describing a way of choosing theories based upon the pragmatic understanding of those theories.
- a clarification question/criticism:
“The fact that deductive reasoning—and accordingly deductive-nomological explanation—involves skill and judgment has two important implications. First, skills cannot be acquired from textbooks but only in practice, because they cannot be translated in explicit sets of rules. Accordingly, to possess a skill is to have ‘tacit knowledge’.” pg.589.
I somewhat agree with this statement, but the explanations that follow do not really support it. There are examples of implicit learning, of unconscious and unintentional learning, and of internalizing rules and developing cognitive skills the way one develops a physical skill. However, the author then writes that the existence of such a mind is also problematic. So, I’m not clear on how skills can be developed without explicit sets of rules.
Idealization and the Aims of Science, Potochnik
- the most important distinction:
“Understanding is at once a cognitive state and an epistemic achievement. Because understanding is a cognitive state, it depends in part on the psychological characteristics of those who seek to understand.” pg.94.
The important distinction here is the phrase ‘who seek to understand’, which emphasizes a certain level of human cognition. This is also what makes a person a scientist.
- a clarification question/criticism:
“But these idealizations are specific to their purposes. This requires focus on one particular scientific aim (at a time), and one particular deployment of that aim, to the exclusion of others.” pg.108.
I believe idealizations can also be made to reach a general rule or law rather than to serve one specific case. For example, the ideal gas law we learned in high school is general enough to be applied across a wide range of conditions, from near vacuum up to moderate pressures (only at very high pressures does it start to fail). If we want to be more specific, we may use a modified equation of state, for example a van der Waals correction, or a formulation for humid air that includes relative humidity. Therefore, idealizations can aim at a broader view of a topic instead of being so purpose-specific.
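To make the example concrete, here is a small worked calculation of my own (the numbers are assumptions for illustration, not from Potochnik’s text). The idealized law is p*V = n*R*T. For 1 mol of air at T = 298 K in a 0.0248 m^3 vessel, it predicts p = n*R*T/V = (1 x 8.314 x 298)/0.0248 ≈ 100 kPa, about atmospheric pressure, and for air under these conditions the prediction is accurate to a fraction of a percent. A refinement such as van der Waals’ equation, (p + a*n^2/V^2)(V − n*b) = n*R*T, only matters when the gas is compressed enough that the correction terms become comparable to p and V. So the same idealization serves many cases before any purpose-specific correction is needed.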
Scientific Explanations
Design explanation: determining the constraints on what can be alive, Arno G. Wouters
- the most important distinction:
“Yet, unlike accidental generalities and like causal relations, functional dependencies are in a sense physically necessary: an organism that has the dependent trait cannot be alive (or will be less viable) if it has the alternative trait instead of the needed one. In other words, a functional dependency is a constraint on what can be alive.” pg.75.
I think it is important, from a philosopher’s perspective, that there are boundary conditions and initial conditions for every physical system, depending on its scale in space (dimensions in nm, µm, etc.) and its extent in time (e.g., how much time a spaceship needs to decelerate). Here, the author makes an important attempt to state a constraint on being alive by elucidating functional dependencies.
- a clarification question/criticism:
“They (functional dependence relations) are synchronic in the sense that the need must be satisfied at the time that the demand arises.” pg.75.
I understood this statement as saying that the demand (more oxygen in the blood system) and the supply (lungs) should be available at the same time. However, evolution is nothing like lightning; it is more like a process, and it cannot be understood if we don’t consider the changes happening in the environmental system. For example, fish used gills to take up the oxygen dissolved in water. But before then, there was not enough oxygen in the atmosphere for a very long time. First, free oxygen accumulated slowly (for a long time much of it was consumed in oxidizing rocks), and then oxygen species dissolved in the oceans. During this time, there were only plankton and bacteria. As the level of oxygen in the oceans increased, evolution progressed and fish with gills emerged. My point is that the author never mentions the effects of the environment itself in directing evolution and causing natural selection. The demand arises not only from the creature’s needs, but also from changes in the surroundings, and this happens over a long time period.
Why Ask, “Why?”? An Inquiry Concerning Scientific Explanation, Wesley C. Salmon
- the most important distinction:
“Developments in twentieth‐century science should prepare us for the eventuality that some of our scientific explanations will have to be statistical—not merely because our knowledge is incomplete (as Laplace would have maintained), but rather because nature itself is inherently statistical.” pg.6.
I just wanted to say ‘thank you!’ for this beautiful sentence. This sentence also pushes back against the deterministic view of Laplace and others. For example, statistical thermodynamics, which is the first step before diving into the world of particles and quantum mechanics, was championed by Boltzmann, who was left largely alone to defend his ideas and eventually committed suicide. I still have trouble understanding why accepting a statistical world required so much effort of the human mind.
- a clarification question/criticism:
“The transmission of light from one place to another, and the motion of a material particle, are obvious examples of causal processes. The collision of two billiard balls, and the emission or absorption of a photon, are standard examples of causal interactions.” pg.8.
Transmission, emission, and absorption are three modes of radiation acting on a surface, so those are basically light-electron interactions, although some say transmission is just a passing wave. I think that singling out transmission as a causal process is wrong in the sense of radiation. I wouldn’t distinguish causal processes from causal interactions, as Hume suggested, but I agree with the idea that cause and effect are more analogous to continuous processes, which bring interactions into play.
Models and Representation
Models and Representation, R. I. G. Hughes
- the most important distinction:
“The requirement of empirical adequacy is thus the requirement that interpretation is the inverse of denotation.” pg. 333.
Overall, I’m impressed by this simple sentence because I can connect it with van Fraassen’s constructive empiricism, on which accepting a theory involves the belief that it is empirically adequate. In my view, interpretation (in the sense of this reading) maps back to a broader theory that was postulated, via denotation, in the earlier stages of research. The demonstration step relates to the mathematical or material model, providing empirical adequacy as a bridge between denotation and interpretation.
- a clarification question/criticism:
“Galileo’s strategy is to take a problem in physics and represent it geometrically. The solution to the problem is then read off from the geometrical representation. In brief, he reaches his answer by changing the question; a problem in kinematics becomes a problem in geometry. This simple example suggests a very general account of theoretical representation. I call it the DDI account.” pg. 327.
I don’t think this example is a good starting point for introducing a new account, because it mostly focuses on the representation/denotation step. For demonstration, he could have mentioned the mathematical expression of the motion (x=1/2*a*t^2) or shown the result of an experiment (e.g., a car moving from city A to city B).
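To sketch what I mean by the demonstration step, here is a small worked example of my own (not taken from Hughes’s paper). Denote the falling body by the model x=1/2*a*t^2. Demonstration then happens inside the model: at t = 1, 2, 3, 4 the distances are proportional to 1, 4, 9, 16, so the distances covered in successive equal time intervals stand in the ratio 1:3:5:7 (Galileo’s odd-number rule). Interpretation carries that ratio back to the world as a prediction about marks on an inclined plane, which can be checked by experiment.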
Models and Fiction, Roman Frigg
- the most important distinction:
“What is missing in the structuralist conception is an analysis of the physical character of model systems. (…) If the Newtonian model system of sun and earth were real, it would consist of two spherical bodies with mass and other concrete properties such as hardness and color, properties that structures do not have; (…) “ pg.253.
I appreciate his distinction between model systems as real or as hypothetical entities. Still, it sounds a bit cheesy that the Newtonian model system, were it real, would have to consist of spherical bodies with hardness and color (?!). If the model system is designed to show the gravitational force between two celestial objects, then why do we care about their hardness or color? The model describes the force, not the solid objects.
- a clarification question/criticism:
“Hence, the essential difference between a fictional and non-fictional text lies in what we are supposed to do with it: a text of fiction invites us to imagine certain things while a report of fact leads us to believe what it says.” pg. 260.
So, what if we read a text by an unknown author on weather predictions for the next 20 years? Let’s say they write about the expected climate changes in the South Bend area, saying, ‘Lake Michigan will evaporate quickly and will trigger tornadoes almost every week during spring and summer.’ How can we decide whether this is part of a horror novel or a scientific claim? I think there should be more distinctive features of scientific texts, such as reliability, testability, fallibility, predictive power, applicability, etc. (Thanks, Sir Karl, again!)
History and Philosophy of Science
– What is, according to Chang, the complementary function of history and philosophy of science?
- Philosophy and history together provide an organized skepticism and criticism directed at the undiscovered and forgotten parts of science. Chang argues that eliminated scientific theories can be brought back to scientists’ attention. Although he supports his ideas with reasonable examples, such as cold radiation (about which I have zero intuition), I tend to believe more in the survival of the best explanations, as discussed in van Fraassen. I couldn’t see the pragmatic value of reconsidering old, forgotten, somewhat falsified and weakly standing theories.
- I believe the survival of the best explanation and the rediscovery of an old theory are not contradictory. Explanations survive if they meet the needs of their time and pave the way for new predictions or discoveries, until they are falsified by new tests. In contemporary science, we can still discard the surviving theories based on new findings brought by new theories or by tools enabled by recent technologies. Here, we can go back to an old theory that satisfies the conditions we’re dealing with. However, that old theory now holds because of new technologies and advanced techniques (e.g., computational models), and it transforms into the best surviving explanation. In the case of the heliocentric theory, people took it into consideration again because of improvements in observation, namely telescopes. Therefore, I don’t see the necessity of going back in history to find a theory on dusty shelves if we already have a valid and working model/theory.
– What is a problematic in Pitt’s article?
- A problematic is a set of intellectual concerns that scientists need to explore. Problematics are found within a context and its relations, but they can evolve and change over time, creating their own history. That’s why we should admit that they can be considered in multiple contexts, and we must avoid cherry-picking whichever context best supports our own ideas.
Underdetermination in Science
– Do you agree with Duhem that there cannot be crucial experiments? Can you make an example?
- I think there might be crucial experiments, but they do not guarantee a full description or explanation of a phenomenon. I want to give an example from my field of research (plasma catalysis). Catalytic reactions have been studied for years to increase the production rate of valuable products. Once we combined those reactions with plasma, we observed an enhanced conversion of the input gas flow (for example, in the production of ammonia from nitrogen and hydrogen). Some thought the plasma was affecting the catalytic materials, while others believed the material was changing the plasma properties. In a first experiment, our group showed that there is no change in the macroscopic properties of the plasma. Then we designed an experiment with some expensive tools to see whether there is any change in the catalytic material due to the plasma. So, the second experiment seemed crucial to us for ending the discussion on what is responsible for the enhancement. However, as I stated before, we didn’t probe the plasma at the atomic level; it was a lumped study, which may hide possible changes in the plasma. In Duhem’s sense, there are no crucial experiments, but I believe an experiment might still be crucial if its result provides an answer that strengthens one possible explanation.
– In your opinion, what are the consequences for the rationality of science if we accept that theory choice does not work as an algorithm but it is influenced by values?
- As Kuhn suggested, the choices scientists rely on are affected by objective, shared criteria as well as by subjective factors. The algorithm is mostly the objective part of the process, because it returns a result based on predefined rules. However, this algorithm also requires some input to start with, which makes ‘the scientific algorithm’ shaky. The input information may come from previous theories or experiments, which are possibly influenced by tradition or even by ‘the spirit of the time’, if we think of the geocentric theory as an example. On the other hand, science is still reliable and rational due to its testability and fallibility (yes, I love Popper). The geocentric theory was applicable to how stones fall, how water pumps function, etc., but it required a stationary Earth. Many astronomers questioned this: “OK, we observe other planets moving around, so why is our lovely Earth stationary?” They were not able to understand how gravity keeps the planets in their orbits and prevents people and the atmosphere from flying away, until someone called Newton proved it mathematically by inventing differential calculus. Is this theory testable, even though he just showed mystical mathematical expressions? Absolutely yes! Look around! You can easily see the ocean moving back and forth at the coast due to the tidal force between Earth and Moon, exactly as Newton describes. So, it is highly possible that theory choice is a combination of values and criteria; but once we accept that point of view, we are also framing an unshakable stage for science itself, and questioning its rationality is out of the game, I guess.