Piyush Marmat

My Unwarranted Notes on Philosophy

Last updated on:
"For my own part, I would rather excel in knowledge of the highest secrets of philosophy than in arms." — Alexander the Great
"For my own part, I can do philosophy my own way." — Me the not Great

What is Philosophy?

Philosophy is the general inquiry into all things, which are of two kinds: real things (entities) and abstract things (ideas). The totality of all things is called the world. The totality of all entities is called reality. An entity is said to exist if it is instantiated in reality. Entities exist independently of any mind. The totality of all ideas is called the abstract world. An idea is said to be formed if it is instantiated in a mind. Ideas can be objective in the sense that different minds can form the same ideas, but they do not form independently of minds. But what is a mind? A mind is that which can form and retain ideas.

Metaphysics
Metaphysics is the branch of philosophy that inquires into the fundamental nature of reality. Its sub-branch that studies existence itself is called ontology; its core question is: "Which entities exist?" Within ontology, a further sub-branch called mereology establishes what counts as a part and what counts as a whole. A part is an entity that can combine with at least one other entity to form a whole. A whole is an entity that has at least one part. An ontological simple is an entity without parts; whether simples exist is an open mereological question. Composition refers to which parts a whole has and how they combine. A composite entity can be described in terms of the entities that are its parts; this process is called ontological reduction. Some composites have emergent properties: properties that are not found in any part alone. Existence at the composite level depends on the existence of its parts but is described at the level of the whole. Whether a composite entity needs to be defined at all depends on redundancy and usage: we do not invent names for all conceivable compositions, only for those that serve some purpose. The remaining branches of metaphysics (often collectively called natural philosophy) inquire into the properties and relations of existing entities.

Philosophical agents
An agent is an entity that can bring about effects in the real world. Philosophical inquiry is carried out by philosophical agents. A philosophical agent is an agent that can form ideas about things, their properties, and relations between them. Hence, philosophical agents are mind-bearing agents. Properties are abstract things that describe or characterize other things. Relations are descriptions of interactions or connections between two or more things. From now on, philosophical agents are called agents for brevity. Objects are entities other than agents.

All agents observed so far have been living entities; moreover, they are sentient beings. To understand philosophical agents, we must first define life, then describe how experience and reasoning arise, and finally relate these to things.

An entity is alive if it is naturally assembled and maintains its organization and functionality through continuous interactions with its environment. It must have a definite boundary separating it from its environment. It must acquire, transform, and incorporate external resources to replace decayed or damaged components, thereby preserving its internal structure and operational capacity. This ongoing self-maintenance defines its physical identity. Even during dormancy, the entity must have a latent potential to resume self-sustaining processes when conditions allow. If this potential is irreversibly lost, the entity is dead. While reproduction is not strictly necessary for life, living entities typically have the capacity to reproduce either by generating independent copies or by replacing components incrementally.

For a living entity to be a philosophical agent, it must be sentient. Sentience is the natural ability to perceive and form an internal representation of the existence of the self (self-awareness). Not every living entity is sentient. Sentience requires mechanisms to process inputs from the environment, store information, and form ideas based on this information. Reasoning is the capacity to examine and relate ideas to each other. Experience is the continuous formation and updating of ideas based on interaction with the environment. An agent can sense its environment and then form ideas about the existence, properties, and relations of things, including itself. These form the qualia of that agent. Although sentience provides the capacity to think, it does not guarantee inquiry-oriented or reflective thoughts. As living beings evolve sufficiently complex nervous systems, as in humans, they gain the ability to philosophize.

Beliefs and Epistemology

Experience and reasoning are processes through which an agent forms beliefs. A belief is an idea held by an agent about a thing, its existence, properties, or relations.

A belief qualifies as true, or is called a truth, in either of two independent ways:

1. Coherence (a priori, reasoned truth or inference) - A belief is coherent if it is inferred from other beliefs without contradiction. A contradiction arises when the truth of the inferred belief requires the starting belief (which was assumed to be true) to be false. Contradiction renders the assumed beliefs necessarily false. Beliefs are coherent only when they do not contradict themselves or other held truths. Inference is the process by which an agent forms new beliefs from previously held beliefs using reasoning.

2. Correspondence (a posteriori, empirical truth or fact) - A belief corresponds to reality if it accurately describes an entity as it exists. Beliefs that fail to correspond are false. A belief must be coherent to qualify for correspondence, since incoherent beliefs are already false. But coherence alone does not establish correspondence; some justification is required.

Knowledge is belief that is justified (demonstrated or proven to be true). Epistemology (theory of knowledge) is the study of justification of beliefs. The goal of epistemology is to decide what beliefs are regarded as knowledge.

Epistemology is hence the branch of philosophy that focuses on what an agent can know about anything, and how, either by experience (a posteriori) or by reasoning alone (a priori).

A posteriori knowledge - Beliefs that are conceived through experience and justified to be true only by observation or empirical evidence. These are correspondent truths or facts.

A priori knowledge - Beliefs that are conceived through reason and justified to be true without any need for observation or empirical evidence. These are coherent truths.

If the claim "there are no facts" is true, then the content of the claim itself (an assertion about reality) must hold. But asserting it makes it a fact, because it would then correctly describe reality. This creates a contradiction, and therefore the claim "there are no facts" is false. This implies that facts are descriptions of reality that are necessarily true but may not be known to an agent.

Possibility, Necessity, Causality
Entities whose non-existence is self-contradictory necessarily exist. Entities whose existence is self-contradictory cannot exist under any circumstances. Entities whose existence contradicts other necessary entities cannot exist. Entities whose existence is neither necessary, self-contradictory, nor contradictory to necessary entities are possibly existent. The existence of possible entities depends on causal conditions. If something exists that could have not existed, or fails to exist while it could have, then causes are required for its actualization or non-actualization. Everything that exists or has existed must therefore have been caused by something. This causal chain may regress indefinitely or terminate at an uncaused ultimate cause. The non-existence of an ultimate cause is not self-contradictory, so an uncaused cause is possible, but not necessary. Similarly, the non-existence of an infinite causal regress is possible, but not necessary. Whether every entity has a cause or whether the causal chain terminates is an open question in metaphysics.

Reason and Logic

The branch of philosophy that studies coherent reasoning is called Logic. The goal of logic is to define the coherent rules of inference so that an agent can construct coherent beliefs. Coherent reasoning is important because it is a necessary requirement for beliefs to hold any meaningful description. This requirement of coherency in logic is called the principle of non-contradiction, which says that a belief cannot be true and false at once. As long as we presuppose this principle, our inferred beliefs are coherent. Aristotle showed that if we assume that a belief can be both true and false at the same time, then this assumption itself can be both true and false at the same time. The assumption therefore contradicts itself, implying its own negation: no belief can be both true and false at the same time. This finally affirms the principle of non-contradiction. It is important to note that this principle does not forbid the truth value of a belief from being unknown: an agent may not yet know the truth value, but the belief cannot be both true and false at the same time.
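
As a small aside, the principle can also be stated and checked in a proof assistant; the minimal sketch below (in Lean 4, with a theorem name of my own choosing) shows that "not (P and not-P)" is provable for any proposition P without any extra assumptions.

```lean
-- A minimal sketch: the principle of non-contradiction, ¬(P ∧ ¬P),
-- holds for every proposition P and needs no additional axioms.
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun h => h.2 h.1  -- from "P and not-P", apply "not-P" to "P" to reach a contradiction
```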

Logic requires a system called language to express and organize beliefs. A system is a collection of things together with relations between them, treated as a unified whole for the purpose of description or reasoning. A proposition is a belief expressed in a language with sufficient internal structure to determine its truth conditions. Here, internal structure refers to syntax, which consists of symbols and rules for combining them, and semantics, which assigns meanings or interpretations to those symbols.

Axiology, Aesthetics, and Ethics

But why does an agent hold beliefs? Because beliefs can transform into values that ultimately influence the agent's actions. A value is a belief about what an agent treats as desirable or worthwhile. A value system is a system of values and relations between them. Values are of two kinds: moral values and non-moral values. A moral value is a value that classifies actions as obligatory or forbidden. A non-moral value is a value that classifies actions as allowed but not required. Allowed means not forbidden. A choice is an action an agent can select when more than one action is possible for that agent. Moral and non-moral values can overlap: an action may be both allowed by a non-moral value and required or forbidden by a moral value. Agents that can make choices based on moral values are moral agents. Axiology is the branch of philosophy for inquiry into value systems. Ethics is the inquiry into moral values and moral choices. Aesthetics is the inquiry into non-moral values and evaluations of things or experiences. Ethics and aesthetics are sub-branches of axiology.

Ethics presupposes choice, because moral classification applies only when an agent can select actions. A moral claim of the form “an agent ought to do X” implies that the agent can choose X. If the agent cannot choose X, the moral claim cannot apply. Kant described this as "ought implies can": if you ought to do it, then you must first be able to do it. Besides this, there is the "is-ought" distinction put forward by Hume, which states that moral claims or prescriptions (what ought to be) cannot be inferred from descriptive claims (what is), because prescriptions depend entirely on the choice of value system (which is not necessarily unique) and hence are not descriptions or facts. Understanding value systems therefore requires examining agents, their values, and their capacity for choice.

A state of experience is an agent’s interaction with the world that produces a belief in the agent about that interaction. Two basic classifications of state of experience used in ethics are suffering and pleasure, which are beliefs labeling experiences as undesirable or desirable relative to moral values. These labels are value-dependent, not facts. Axiology depends on epistemology and logic because values are beliefs, and beliefs require coherence rules and justification rules to be examined.

Philosophical Arguments

Philosophical inquiry is carried out through arguments, which express and test reasoning applied to beliefs. The beliefs from which reasoning begins are premises. The belief an argument aims to justify is the conclusion. An argument is a structured system of premises offered as justification for a conclusion. Arguments provide justification, and justification is required for beliefs to qualify as knowledge. A deductive argument is an argument in which the premises, if true, make the conclusion true by logical necessity. Deductive arguments preserve truth and are the only arguments that can be valid or invalid. A deductive argument is valid if it preserves truth by necessity. A valid deductive argument is sound if its premises are true, otherwise it is unsound. An invalid deductive argument fails to preserve truth necessarily and is a deductive fallacy. Deductive arguments justify a priori knowledge, because their justification does not rely on observation. An ampliative argument is an argument in which the conclusion contains content not made true by logical necessity from the premises alone. Its conclusion is not guaranteed by truth preservation, but extends beyond the information given. Ampliative arguments justify a posteriori knowledge, because their justification depends on observation or evidence. Ampliative arguments are of two kinds: inductive and abductive. An inductive argument generalizes from observed cases to unobserved cases without necessity. An abductive argument proposes a belief as the best available explanation without necessity.

A table highlighting different types of arguments, given by Peirce

| Abduction | Deduction | Induction |
| --- | --- | --- |
| Case from Rule and Result | Result from Rule and Case | Rule from Case and Result |
| Rule (first principle): All the beans in this bag are white | Rule (first principle): All the beans in this bag are white | Case (hypothesis): These beans are from this bag |
| Result (conclusion): These beans are white | Case (hypothesis): These beans are from this bag | Result (conclusion): These beans are white |
| Case (hypothesis): These beans are from this bag. (The beans were taken out of this bag.) | Result (conclusion): These beans are white. (These beans that we have now are white.) | Rule (first principle): All the beans in this bag are white. (All the beans taken out will be white.) |

Only a deductive argument guarantees the truth of the conclusion. Abductive and inductive arguments are trial fits to make the deduction work. The idea of non-deductive or ampliative arguments introduces the concept of probability (how likely the conclusion is). The more probable the conclusion, the stronger the ampliative argument. If, in addition, the premises are in fact true, then such strong arguments are called cogent arguments.

In abduction, we seek a probable case that makes the conclusion deductively true given the rule is true.

In induction, we seek a probable rule that makes the conclusion deductively true given the case is true.

In a way, abduction is about finding what probably happened (finding the case that fits the rule), and induction is about what probably will happen (as the rule dictates it). Deduction is finding what happens.
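
As a rough sketch (my own toy encoding of the bean example, not something from Peirce), the claim that only deduction preserves truth can be checked mechanically by searching truth assignments for counterexamples:

```python
from itertools import product

# Toy encoding: Rule = "all beans in this bag are white",
# Case = "these beans are from this bag", Result = "these beans are white".
# The meaning of the rule excludes any world where the rule and the case hold
# but the result does not. A pattern is deductively valid only if no allowed
# world makes all premises true and the conclusion false.

def entails(premises, conclusion):
    """Brute-force semantic check over all allowed truth assignments."""
    for rule, case, result in product([True, False], repeat=3):
        if rule and case and not result:
            continue  # impossible world, ruled out by what the rule means
        world = {"rule": rule, "case": case, "result": result}
        if all(world[p] for p in premises) and not world[conclusion]:
            return False  # counterexample found
    return True

print(entails(["rule", "case"], "result"))   # True:  deduction is truth-preserving
print(entails(["rule", "result"], "case"))   # False: abduction only suggests the case
print(entails(["case", "result"], "rule"))   # False: induction only suggests the rule
```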

Relative to a deductive conclusion, the premises and argument involved are called its proof. Based on how we proceed to supply proofs, we find the following problem:

Münchhausen Trilemma

There are only three ways of absolutely completing an argument:
1. The circular argument - in which the proof of some proposition presupposes the truth of that very proposition. The problem is that it is not an argument but rather some elaborate rephrasing of the proposition to be proven as its proof.
2. The regressive argument - in which each proof requires a further proof, and so on. The problem here is the unending nature of such arguments, commonly described as the endless chain of "why?" questions.
3. The dogmatic argument - which rests on accepted propositions that are merely asserted rather than proven, and hence simply refuses to be an argument at all.

Popper suggested that one must accept this trilemma as unsolvable and construct knowledge by the method of conjecture and criticism. Why? Because all knowledge comes from two basic paths. One is deduction: if A implies B, then knowing A gives B with certainty. The other is checking for contradictions. If two claims A and B are found to contradict each other, one or both must be false. Observing A and B together can suggest they are compatible, but this never implies necessity. Empirical evidence can only support or challenge claims, not establish logical necessity. Outside deduction, inquiry is always provisional: we can rule out what is impossible, but we can never be certain of what is merely consistent.

Science and Research

The process of conjecture and criticism, suggested by Popper, underlies the methodological core of science. Science is a specialised method of philosophical inquiry under specific constraints called the "scientific method". A method of inquiry that involves conjecture (provisional knowledge based on empirical evidence) and criticism (experimental testing, broadly construed, to ensure consistency and correspondence) is called science. The goal of science is to create a description of any system (the part of the world that is being studied) by establishing its properties and the relations among those properties by reason, experience, or both. At any instant, such a description is called a scientific theory. Scientific knowledge can hence be defined as coherent true beliefs that withstand repeated scrutiny and revision. Apparently non-scientific fields like the humanities and social studies (law, politics, history, etc.) do contain features of the scientific method, since the goal is still to describe and make inferences. Depending on the system, scientific studies are of two types: empirical and formal.

Formal Science

Formal sciences involve developing theories of abstract systems that may model some general or idealised features of real-world or natural systems. The axioms (fundamental premises that are assumed to be true) are set in such a way that the deductions lead to useful or at least interesting results (theorems) that are universal in application and deductively valid. Logic, statistics, and computer science are all examples of formal sciences. Formal scientific theories are hence exact relative to their axioms and can be revised when axioms are changed or reinterpreted. Theorems are the propositions deduced from axioms or from already deduced theorems. The argument presented to verify the deduction of a theorem is called the proof of that theorem. The systematic study of the deductive consequences of axioms within formal systems is called Mathematics. Mathematics provides the general framework, and the formal sciences instantiate it in domain-specific ways. A system of axioms is called:

Complete: if every true statement expressible in the system has a proof from the axioms.

Incomplete: if some statement can be neither proved nor disproved from the axioms, then that system of axioms is incomplete.

Decidable: if there is an algorithm (a standard method) that can check, for any statement, whether it follows from the axioms.

Consistent: if it is free of contradiction, that is, it can prove a statement but not also its negation. If a statement can be proved true from the axioms, it should not also be provable false; otherwise the system of axioms is inconsistent.

In classical logic, where every proposition is either true or false (this is sometimes called the law of excluded middle), every inconsistent system of axioms is trivially complete. Such an inconsistent system can prove both a theorem and its negation, and by the principle of explosion, it can then prove any proposition to be true or false. Because of this “garbage in, garbage out” effect, a useful axiomatic system must be consistent. While it may not be complete, consistency is essential for the system to be meaningful and useful.
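
The explosion can be seen with a brute-force truth-table check; the sketch below (my own illustration) confirms that "if A and not-A, then B" has no counterexample in two-valued logic, so a contradiction entails anything:

```python
from itertools import product

# Principle of explosion, two-valued check: the implication (A and not A) -> B
# is true under every truth assignment, because its antecedent is never true.
def explosion_is_tautology():
    return all((not (a and not a)) or b for a, b in product([True, False], repeat=2))

print(explosion_is_tautology())  # True: no assignment refutes "contradiction implies anything"
```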

Empirical Science

In the case of empirical sciences (where the system is a part of reality, for example physics, biology, or social science), the theory does not describe the system "exactly" but rather describes a model of the system, which is an approximation of it. In empirical sciences, every observation of a system may give us a posteriori knowledge of some properties of the system. The model has to be developed such that it explains all the known properties. But such a model is not complete; it has to predict new observable properties. If such predictions are observed, then those observations become evidence for the theory. If observations are contrary to the predictions, then the theory is revised or discarded. This feature of theories in empirical sciences is called falsifiability. Falsification enables us to test whether a theory is empirical in nature or not. This process has to be repeated in search of finer theories, and this process is called scientific research.

The metaphysical basis of empirical science
Empirical science begins with the description of events. All observations are ultimately observations of events. An event is a change from one physical state to another. Events presuppose change. Change implies a difference between states: something is not the same at one time as it is at another, or, we can say, something happens at a particular place and time. This necessarily entails time, space, and material undergoing motion or transformation. A physical state is defined by a material configuration in space at a time. An event is characterized by three fundamental questions: what happens, where, and when. This defines the fundamental program of scientific observations. Theoretical science then tries to construct models that answer these questions along with an extra one: how events occur, which is tested with scientific experiments.

The most basic observable change is the change in position over time called motion. Motion is thus the minimal empirical content of reality. Mechanics, the study of motion, forms the foundation of physics. Without motion, there are no events to explain; without mechanics, physics does not begin.

Hence empirical science implicitly presupposes the following metaphysical properties:

  1. Materiality: Physical systems consist of material, which is the subject of change.
  2. Spatiality: Material exists and is localized in space. It constitutes (or gives substance to) all existing entities.
  3. Temporality: Events occur in time, and time implies ordering or a direction of happening.

Observations in Empirical Science

How do observations of a system give us knowledge of its properties?
Observations are possible because we possess the ability to sense our environment. The raw outcomes of these senses form experience. Measurement is the process of transforming observations into objective, recordable, and communicable descriptions of a property of a system. A measured property is a constructed property: it is a label we use to describe certain aspects of a system that consistently correlate with our sensory experiences. These labels gain meaning through interaction and comparison. Every measured property is hence a record of a sensible effect arising from the interaction of the observer with the system. Because of these interactions, measurements are inherently comparative. Measurement is either categorical or numerical. Unlike categorical measurement (assigning a category value to an observation), numerical measurement requires us to define reference values to standardize measurements. These reference values are called units.

What is a relevant observational evidence?
A priori propositions are either definitions or deductions, but a posteriori propositions come with the need for observational evidence to justify them. For example, the proposition "420 is an even number" is an a priori proposition, and if you know the definitions of "420" and "even number," deducing the truth of the proposition is not very difficult. Now consider "All ravens are black." The claim is clearly a posteriori and impossible to justify by formal proof; it can only be accepted as a provisional truth unless falsified by observing a non-black raven.

But a paradox arises under the framework of classical logic. Since "All ravens are black" can be rephrased as "If something is a raven then it must be black", by the contraposition rule in classical logic it is equivalent to "If something is non-black then it must be a non-raven". This equivalence holds because classical logic equates propositions solely based on their truth values, irrespective of the contents of the propositions. While a black raven is proper direct evidence in support of the claim "All ravens are black", something like a red robin or a green lantern is direct evidence for "If something is non-black then it must be a non-raven". This makes a red robin, or anything non-black and non-raven, allegedly evidence for "All ravens are black". This is the raven paradox.

Keeping in mind that bad results often come from bad assumptions, it is reasonable to doubt the use of classical logic here. Empirical sciences, in which propositions are often a posteriori, require evidence-based reasoning, which classical logic cannot handle. In classical logic, because of the assumed law of excluded middle, a proposition like "Either there is a black raven or there isn't" is valid even if you've never seen a raven. Inference in classical logic operates by assuming a proposition to be either true or false and then checking the validity of any compound proposition. In constructive logic, instead of the law of excluded middle, the truth values of propositions are based on proofs (for a priori propositions) and evidence (for empirical propositions). Hence the contraposition rule does not apply, and no raven paradox arises in constructive logic. Any claim regarding a raven requires evidence that ravens exist first. While we cannot provide absolute evidence for a claim like "All ravens are black", we can still conclude that a red robin is not relevant evidence for the claim. It is worth noting that every proposition provable constructively is also valid classically, but not all classically valid propositions are provable constructively.
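
The classical equivalence driving the paradox can be checked mechanically; the sketch below (my own illustration) enumerates truth assignments and confirms that "raven implies black" and "non-black implies non-raven" always get the same truth value under material implication:

```python
from itertools import product

# Classical material implication: "p implies q" is false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

# Contraposition check: over every assignment, "raven -> black" and
# "not black -> not raven" have identical truth values.
same = all(
    implies(raven, black) == implies(not black, not raven)
    for raven, black in product([True, False], repeat=2)
)
print(same)  # True: classically the two claims are one and the same proposition
```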

How to have trust in measurements?
The result of an experiment may comprise several measurements, and it naturally becomes crucial to describe the quality of a measurement. To quantify the quality of any measurement we must report both the best estimate and the uncertainty about it.

Precision is the degree of agreement or consistency (reproducibility) among multiple measurements of the same property. Lack of precision is termed uncertainty, and numerically it is the degree of spread of the values measured for a given property. The greater the precision, the smaller the uncertainty, and vice versa.

An instrument measures a property numerically. Let's say it shows 010.1040. Since the leading 0 (to the left of the 1, before the decimal) is not a significant digit, we can safely rewrite it as 10.1040. Why are we keeping the rightmost 0? Because it is significant. It is obvious that non-zero digits are significant, but the significance of zeros between non-zero digits and of trailing zeros takes a little more to understand. These zeros tell us that that particular digit place (power of 10) contributes zero, and reporting them shows the finest possible measurement that can be made with that instrument, often called the least count, which also serves as a measure of the instrument's precision. For the above example, the least count is 0.0001, because measurement is possible up to the fourth decimal place.
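
As a small sketch (the reading and the helper variables are my own, hypothetical), the least count can be read straight off the number of reported decimal places:

```python
# Hypothetical instrument display; the leading zero is not significant,
# but the reported decimal places fix the finest resolvable step.
reading = "010.1040"
value = float(reading)                  # 10.104 as a number (the float drops the trailing zero)
decimals = len(reading.split(".")[1])   # 4 digits reported after the decimal point
least_count = 10 ** -decimals           # 0.0001, the least count of this instrument
print(value, decimals, least_count)
```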

How exactly are digits and precision related?
In general, n digits can represent 10^n distinct values. Hence adding an extra digit enables us to represent ten times more values than before. Suppose I measure something to be 2.01 and later as 2.02; then I am uncertain about the digit at the third decimal place. With the least count improved from 0.01 to 0.001, I have an extra digit that can represent 10 values in the range between 2.01 and 2.02. This increases my precision tenfold. Adding one more digit improves the least count to 0.0001, and now I can represent 100 values in the same range, which again increases the precision tenfold. As a rule, we can infer that in a base-m number system, adding an extra base-m digit increases the precision m-fold.
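
A tiny numerical check of this (my own illustration): each extra decimal digit multiplies the number of representable steps between 2.01 and 2.02 by ten.

```python
# Number of resolvable steps between 2.01 and 2.02 for least counts 0.01, 0.001, 0.0001.
for k in (2, 3, 4):
    least_count = 10 ** -k
    steps = round((2.02 - 2.01) / least_count)
    print(k, least_count, steps)  # 2 -> 1 step, 3 -> 10 steps, 4 -> 100 steps
```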

This is very interesting in the case of computers, where having one extra bit to represent numbers increases precision by a factor of two. So an extra byte (8 bits) increases precision by 2^8 = 256 times. Landauer showed that erasing (destroying) one bit of information has a minimum energy cost, set by the scale of thermal fluctuations in the circuit, and as discussed above, losing one bit halves the precision.

Accuracy is the closeness of agreement between a measured value and a true or accepted value (precise results that most experiments have consistently given). Measurement error is the amount of inaccuracy and is defined as the difference between the measured value and the ‘true value’ of the thing being measured. Hence a measurement is only meaningful when we have enough knowledge of the errors and uncertainty involved. People sometimes use the words “error” and “uncertainty” interchangeably, but an error can be any real number (since it is a difference between two real numbers), while uncertainty is always a non-negative real number, since it is a spread.

A measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. If two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree).
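
A minimal sketch of this agreement test (the helper name and the numbers are my own, hypothetical):

```python
# Two measurements agree if their uncertainty ranges [x - u, x + u] overlap.
def ranges_overlap(x1, u1, x2, u2):
    return abs(x1 - x2) <= (u1 + u2)

print(ranges_overlap(9.79, 0.05, 9.81, 0.03))  # True: consistent (ranges overlap)
print(ranges_overlap(9.70, 0.02, 9.81, 0.03))  # False: discrepant (ranges do not overlap)
```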

Science requires experiments. Experiments give measurements. Recorded measurements are called data. Statistics is the formal science in which we develop methods for the collection, analysis, interpretation, presentation, and organization of data. Probability theory is the mathematical framework for quantifying uncertainty. Statistics uses probability theory to make statistical inferences. Statistical inference is the process of drawing conclusions from incomplete, indirect, or uncertain information in any experimental data.

Probability - The mathematics of ignorance
In probability theory, an experiment is any procedure that yields a well-defined result. A trial is a single execution of the experiment. The outcome of a trial is the specific result obtained, also called a sample point. The set of all possible outcomes is called the sample space. An event is a subset of the sample space, consisting of one or more outcomes that satisfy a particular condition of interest. To quantify the probability of an event, say E, denoted P(E), we can either:

1. Run real experiments (the frequentist approach), in which the true probability P(E) is the limit of the empirical probability, namely the ratio of the number of times the event E occurs to the number of trials N, as the number of trials approaches infinity. It is assumed that each trial does not affect the others and that the probability does not change with time. These assumptions are necessary for convergence, and the statement that the empirical probability converges under them is called the law of large numbers (a small simulation sketch follows this list).

2. Or construct a probability distribution model (the theoretical approach, which some may call Bayesian), where we define probabilities as fractions going from 0 (certainly not happening) to 1 (certainly happening) for each outcome (this is the probability distribution function that maps outcomes to real numbers in the closed range from 0 to 1), with the condition that the sum of all these probabilities has to be 1, since the sample space contains all possible outcomes and it is certain that at least one of them will happen in any trial. P(E) now becomes the sum of the probabilities of the outcomes that belong to the event E. This theoretical approach is verified or tested against the observed probabilities and updated to capture the patterns in an experiment. It is therefore better than the frequentist approach. It is important to note that if an event is certain (it contains every outcome in the sample space), we assign it probability 1, and similarly we assign 0 if it is impossible. But a probability of 1 does not imply that the event is certain, and similarly a probability of 0 does not imply that the event is impossible. Only in a finite, atomic probability space where every outcome has positive probability do probabilities 1 and 0 correspond exactly to certain and impossible events.
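
A small simulation sketch (a hypothetical die-rolling experiment, my own illustration) shows the frequentist picture at work: the empirical probability of an event drifts toward the model probability as the number of independent trials grows.

```python
import random

random.seed(0)                       # fixed seed so the sketch is reproducible
event = {5, 6}                       # the event "roll at least a five" on a fair die
model_probability = len(event) / 6   # theoretical assignment: 2/6 ~ 0.333

for n_trials in (100, 10_000, 1_000_000):
    hits = sum(random.randint(1, 6) in event for _ in range(n_trials))
    print(n_trials, hits / n_trials)  # empirical probability approaches the model value
```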

The best estimate is the expectation value of the probability distribution that models the data. The uncertainty is the standard deviation of the mean, also called the standard error, which is the standard deviation divided by the square root of the number of measurements made. The standard deviation is the square root of the variance. The variance is the expectation value of the squares of the data values minus the square of the expectation value of the data. The expectation value of a variable x with probability distribution P(x) is the probability-weighted average of all values of x over the entire sample space.
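
Written out (with N the number of measurements, P(x) the distribution, and sigma the standard deviation), these definitions read:

```latex
E[x] = \sum_{x} x \, P(x), \qquad
\mathrm{Var}(x) = E[x^{2}] - \left(E[x]\right)^{2}, \qquad
\sigma = \sqrt{\mathrm{Var}(x)}, \qquad
\text{standard error} = \frac{\sigma}{\sqrt{N}}
```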


How does philosophy make progress?
Philosophy historically contributed to the formation of systematic human inquiry, including logic, mathematics, ethics, and natural philosophy, from which the sciences later specialized. Science presupposes philosophy in its very structure: it relies on logic to reason from evidence, epistemology to justify knowledge claims, metaphysics to assume the existence of a mind-independent reality, and ethics to govern experimentation. Philosophy is not only about what you believe; it is about what you can justify, and under what assumptions. It does not converge on final answers because many of its questions concern the structure of inquiry itself, and expecting an ultimate answer there is irrational. Methodology, inference, and interpretation are applications of philosophy. Philosophy progresses by asking more precise questions and constructing stronger arguments.