Yes, that’s correct! We don’t care about interactions with the barrier that creates the slits, but yes, those collapse it too.
Interacting with the surface we’re measuring on happens in all the experiments. That part doesn’t change, so it shouldn’t be affecting the results. It does collapse the waveform, though, which is how we measure it.
Detecting it at the slit is the part that changes. If we don’t do this, we get wave-like behavior, because there’s no interaction until it hits the surface at the end. The wave can pass through both slits without any interaction. If we put in a detector, then it must interact with that to pass through, so it collapses the waveform and behaves like a particle at that point. This means it must be at one slit or the other, and not both.
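The amplitude bookkeeping behind that difference can be sketched with a toy far-field two-slit model. This is only an illustration under idealized assumptions; all the numbers are arbitrary units, not taken from any real experiment:

```python
import numpy as np

# Toy far-field two-slit model. All parameters are arbitrary
# illustrative units, not tied to any specific experiment.
x = np.linspace(-10.0, 10.0, 1001)    # position on the screen
k, d, L = 5.0, 2.0, 10.0              # wavenumber, slit separation, screen distance

phi = k * d * x / L                   # relative phase between the two paths
psi1 = np.ones_like(x, dtype=complex) # amplitude via slit 1
psi2 = np.exp(1j * phi)               # amplitude via slit 2

# No which-slit detection: amplitudes add first, then square -> fringes
coherent = np.abs(psi1 + psi2) ** 2   # oscillates between ~0 and 4

# Which-slit detection: the particle went one way or the other, so
# probabilities add instead -> the fringes disappear
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # flat at 2
```

The only difference between the two patterns is whether you add the two paths before or after squaring, which is the wave-like versus particle-like behavior described above.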
(Looking at the image up there they have bars placed on the facing side of the slits there, is that the detector they were referring to when they said ‘monitoring’ it in this case?)
We don’t care about interactions with the barrier that creates the slits, but yes, those collapse it too.
Ok, I see you’re actually ignorant. Interactions do not lead to the collapse; they are an intrinsic part of quantum fields. Collapse happens when you step out of the quantum picture with its (mostly) linear equations and try to project the calculations onto the “classical picture”, however your cult of choice explains how that actually happens.
Yeah… no. There are multiple interpretations, but basically it’s when the position needs to be known that causes it. Until then, the position is in a superposition of all possible positions, but for an interaction to occur it needs to be in one position. It’s not about choice. It’s about when information is needed for a physical interaction to occur. If one occurs, then the particle must be at that location.
Collapse happens when you step out of the quantum picture with its (mostly) linear equations and try to project the calculations onto the “classical picture”
This (at least your wording) implies that physics cares about our mathematical models. It doesn’t. Quantum mechanics and “classical” physics are just ways we organize things for education. Though we don’t have a model for it, the universe is not using two separate models of physics. There is no “quantum mechanics” and “classical physics”. There is only physics. When a measurement occurs, the universe isn’t looking at it to see if it should use quantum rules or classical rules. The interaction just occurs.
Value indefiniteness is just solipsism. If particles do not have values when you are not looking, then any object made of particles also has no values when you are not looking. This was the point of Schrödinger’s “cat” thought experiment. Your beliefs about the microworld inherently have implications for the macroworld. If particles don’t exist when you’re not looking at them, then neither do cats, or other people. This view of “value indefiniteness” you are trying to defend is indefensible because it is literally solipsism, and any attempt to promote it above solipsism will just become incoherent.
You say:
it’s when the position needs to be known that causes it. Until then, the position is in a superposition of all possible positions, but for an interaction to occur it needs to be in one position.
This is trivially false, because then it would not be possible for two particles to become entangled on the position basis, which requires them to interact in such a way that depends upon their position values. The other particle would thus need to “know” its position value to become entangled with it, and if this leads to a “collapse,” then such entanglement could not occur. Yet we know it can occur in experiments.
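The point that value-dependent interactions can entangle rather than collapse can be shown with a minimal numpy sketch. This is an illustrative toy, not a model of any specific experiment; a CNOT gate stands in for an interaction whose effect depends on the first particle’s value:

```python
import numpy as np

# Qubit A in an equal superposition, qubit B definitely in |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
zero = np.array([1, 0], dtype=complex)
state = np.kron(plus, zero)          # basis order: |00>, |01>, |10>, |11>

# CNOT: an interaction whose effect on B depends on A's value
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ state                  # (|00> + |11>) / sqrt(2)

# If the interaction forced A into one definite position ("collapse"),
# the result would be |00> or |11> with certainty. Instead, both
# branches survive with amplitude 1/sqrt(2): the particles are
# entangled, not collapsed.
```

The interaction "used" A’s value in both branches at once, which is exactly what a collapse-on-every-interaction rule would forbid.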
If by “know” you mean humans knowing and not other particles, yeah, okay, but that’s obviously solipsism.
Any attempt to defend value indefiniteness will always either amount to:
1. Solipsism
2. Something that is trivially wrong
3. A theory which is not quantum mechanics (one that makes different predictions)
This (at least your wording) implies that physics cares about our mathematical models. It doesn’t. Quantum mechanics and “classical” physics are just ways we organize things for education.
I don’t blame them, it is literally the textbook Dirac-von Neumann axioms. That is how it is taught in schools, even though it is obviously incoherent. You are taught that there is a “Heisenberg cut” between the quantum and classical world, with no explanation of how this occurs.
Though we don’t have a model for it, the universe is not using two separate models of physics. There is no “quantum mechanics” and “classical physics”. There is only physics.
The problem is that the orthodox interpretation of quantum mechanics does not even allow you to derive classical physics minus gravity in a limiting case from quantum mechanics. It is not even a physical theory of nature at all.
We know from the macroscopic world that particles have real observable properties, yet value indefiniteness denies that they have real observable properties, and it provides no method of telling you when those real, observable properties are added back to the world. It thus cannot make a single empirical prediction at all without this sleight-of-hand where they just say, as a matter of axiom in the Dirac-von Neumann textbook axioms of quantum mechanics that it happens “at measurement.”
If measurement is taken to be a subjective observation, then it is just solipsism. If measurement is taken to be a physical process, then it cannot reproduce the mathematical predictions of quantum mechanics, because this “Heisenberg cut” would be a non-reversible process, yet all unitary evolution operators are reversible. Hence, any model which includes a rigorous definition of “measurement” (like Ghirardi–Rimini–Weber theory) would include an additional non-reversible process. You could then just imagine setting up an experiment where this process would occur and then try to reverse it. The mathematics of quantum mechanics and your theory would inevitably lead to different predictions in such a process.
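The reversibility point can be made concrete with a two-line numpy check. This is a minimal sketch using the Hadamard gate as an example unitary; nothing here is specific to any one system:

```python
import numpy as np

# Hadamard gate: a simple example of a unitary evolution operator
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

psi = np.array([1, 0], dtype=complex)   # some initial state
evolved = H @ psi                       # forward unitary evolution
recovered = H.conj().T @ evolved        # apply U-dagger: evolution undone

# Unitarity (U†U = I) guarantees this always works. A collapse like
# (|0>+|1>)/sqrt(2) -> |0> is many-to-one: it has no inverse map, so
# no unitary can represent it -- the tension described above.
```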
Therefore, again, if you believe in value indefiniteness, then you either (1) are a solipsist, (2) don’t believe in quantum mechanics but think it will be replaced by a physical collapse model, or (3) are confused.
The only way for quantum mechanics to be self-consistent is to reject value indefiniteness, at least as a metaphysical point of view. This does not require actually modifying the mathematics. If nature is random, then of course the definite values will evolve statistically such that they could not be tracked and included in the model. All you would need to then demonstrate is that quantum statistics converges to classical statistics in a limiting case on macroscopic scales, which is achieved by the theory of decoherence.
But the theory of decoherence achieves nothing if you believe in value indefiniteness, because if you believe quantum mechanics has nothing to do with statistics at all, then there is no reason to conclude that what you get in the reduced density matrices after you trace out the environment has anything to do with classical statistics, either.
There is no good argument in the academic literature for value indefiniteness. It is an incoherent worldview based on no empirical evidence at all. People who believe it often just mindlessly regurgitate statements like “Bell’s theorem proves it!” yet cannot articulate what Bell’s theorem even is or how on earth it proves that, especially since Bell himself was the biggest critic of value indefiniteness yet wrote the damned theorem!
Value indefiniteness is just solipsism. If particles do not have values when you are not looking, then any object made of particles also has no values when you are not looking.
They do have values. Their position is just a superposition, rather than a single discrete one, which can be described as a wave. Their value is effectively a wave until it needs to be discrete.
This was the point of Schrodinger’s “cat” thought experiment.
Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur. The thought experiment is a meme because it’s absurd, and it is absurd, but only because the entire premise is fundamentally flawed. It can’t exist as it’s implied. Also, even if this weren’t the case, that doesn’t actually prove it wrong. The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.
This view of “value indefiniteness” you are trying to defend is indefensible because it is literally solipsism and any attempt to promote it above solipsism will just become incoherent.
I’m literally not. My entire point is that it isn’t solipsism. Any interaction causes the waveform to collapse. Not a person observing it. The universe doesn’t care about what we describe as consciousness (or sapience, as it’s better described). It just does physics. The fact we don’t have a model for it doesn’t change anything.
This experiment shows that behavior can change just from a measurement. How do you explain that while also not allowing superpositions? You make claims about this meaning a few things (which I don’t agree with), and yet you give no explanation of an alternative. Something is happening. How do you explain it?
They do have values. Their position is just a superposition, rather than a single discrete one, which can be described as a wave. Their value is effectively a wave until it needs to be discrete.
To quote Dmitry Blokhintsev: “This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.”
When I say “real values” I do not mean pure abstract mathematics. We do not live in a Platonic realm. The mathematics are just a tool for predicting what we observe in the real world. Don’t confuse the map for the territory. The abstract wave has no observable properties, it is pure mathematics. If the whole world was just one giant wave in Hilbert space, then this would be equivalent to claiming that the entire world is just one big mathematical function without any observable properties at all, which obviously makes no sense as we can clearly observe the world.
To quote Rovelli: “The gigantic, universal ψ wave that contains all the possible worlds is like Hegel’s dark night in which all cows are black: it does not account, per se, for the phenomenological reality that we actually observe. In order to describe the phenomena that we observe, other mathematical elements are needed besides ψ: the individual variables, like X and P, that we use to describe the world.”
Again, as I said in my first comment, any mathematical theory that describes the world needs to, at some point, include symbols which directly refer to something we can observe. An abstract mathematical function contains no such symbols. If you really believe that particles transform into purely mathematical waves, then you need some process to transform them back, or else you cannot explain what we observe at all, and so far the only process you have put forward is “it happens at every interaction” which is just objectively and empirically wrong because then entanglement would be impossible.
This is why you run into contradictions like the “Wigner’s friend” paradox where Wigner would describe his friend in a superposition of states, and if you believe that this literally means that all that exists inside the room is an abstract function, then you cannot explain how the observer in the room can perceive anything that they later claim they do, because there would be no observables inside of the room.
You cannot get around criticisms of solipsism by just promoting purely abstract mathematical entities to being “objective reality” as if objects transform into purely Platonic mathematical functions. At least, if you are going to claim this, then you need some rigorous process to transform them back into something that is described with mathematical language where some of the symbols refer to something we can actually observe such that we can then explain how it is that we can observe it to have the properties that it does when we look at it.
Sure. That doesn’t make the general understanding of the thought experiment accurate. Once the decay of the atom that triggers the poison is detected, it’s no longer in a superposition. It has to not be in order for the detection to occur.
Please scroll up and read my actual comment. You seem to have skipped all the important technical bits, because you are claiming something which is mathematically incompatible with the predictions of quantum mechanics. The personal theory you are inventing here would literally render entanglement impossible.
The double slit experiment shows that an interaction can change the result from wave-like to particle-like behavior.
Decoherence is not relevant here. Decoherence theory works like this:
1. Assume that the system+environment become entangled.
2. Assume that the observer loses track of the environment.
3. Trace out the environment.
4. This leaves you with a reduced density matrix for the system where the coherence terms have dropped to 0.
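The steps above can be sketched numerically for the smallest possible case, one system qubit and one environment qubit. This is an illustrative numpy toy, not a derivation:

```python
import numpy as np

# Step 1: system+environment entangled in (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())      # full density matrix (a pure state)

# Steps 2-3: "lose track" of the environment and trace it out
rho4 = rho.reshape(2, 2, 2, 2)         # axes: sys, env, sys', env'
rho_sys = np.einsum('abcb->ac', rho4)  # sum over the environment index

# Step 4: the reduced matrix is diag(1/2, 1/2) -- its coherence
# (off-diagonal) terms are exactly 0 -- even though the full
# system+environment state is still pure and coherent.
```

Nothing physical happened between `rho` and `rho_sys`; the only operation was discarding the environment index, which is the epistemic move discussed below.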
Notice that step #2 is entirely subjective. We are just assuming that the observer has lost track of the environment in terms of their subjective epistemic access, and step #3 is then akin to statistically marginalizing over the environment in order to then remove it from consideration.
This isn’t an actual physical transition but an epistemic one. The system+environment are still in a coherent superposition of states, and decoherence theory merely shows that it looks like it has decohered if you only have subjective knowledge on a small portion of the much larger coherent superposition of states.
If you believe that a superposition of states means it has no observable properties and is just purely a mathematical function, then decoherence does not solve your problem at all, because it is ultimately a subjective process and not a physical process. If you spent time studying the environment enough before running the experiment such that you could include the environment in your model then decoherence would not occur.
I’m literally not. My entire point is that it isn’t solipsism. Any interaction causes the waveform to collapse.
Which, again, renders entanglement impossible, since objects must interact to become entangled.
If we accepted this personal theory of yours, then quantum computers should be impossible, because the qubits all need to interact many, many times as the algorithm progresses for them to all become entangled and to create a superposition of states of the whole computer’s memory.
You are not listening, and you are advocating things that are trivially wrong.
yet you give no explanation of an alternative. Something is happening. How do you explain it?
I just don’t deny value definiteness. That’s it. There is nothing beyond this.
Consider a perfectly classical world that is nonetheless fundamentally random. The randomness of interactions would disallow us from tracking the definite values of particles at a given moment in time, so we could only track them with an evolving probability distribution. We can represent this probability distribution with a vector and represent interactions with stochastic matrices. Given that the model does not include observable definite values, would it then be rational to claim that particles suddenly transform into an infinite-dimensional vector in configuration space when you’re not looking at them and lose all their observable properties? No, of course not. The particles still have real observable properties in the real world; you just lose track of them in the model due to their random evolution.
You could create a simulation where you assign definite values and permute them stochastically at each interaction, and this would produce the same statistical results if you make a measurement at any given step. It is the same with quantum mechanics. It is just a form of non-classical statistical mechanics. There is no empirical, mathematical, or philosophical reason to claim that particles stop possessing real values when you are not looking at them. It is not hard to put together a simulation where the qubits are assigned definite bit values at all times and each logic gate just stochastically permutes those bit values. I even created one myself here. John Bell also showed you can do this with quantum field theory in his paper “Beables for Quantum Field Theory.”
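A toy version of that kind of simulation, in the classical setting of the preceding paragraph, fits in a few lines. This is a sketch under the stated assumptions (a two-state particle with an arbitrary 30% swap probability), not the linked simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stochastic "interaction": a two-state particle keeps its definite
# value with probability 0.7 and swaps it with probability 0.3.
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])            # column-stochastic transition matrix

# Model view: track only a probability vector, evolved linearly by T.
p = np.array([1.0, 0.0])              # start: definitely in state 0
for _ in range(3):
    p = T @ p

# "Reality" view: every particle always holds a definite value; each
# interaction just stochastically permutes it.
values = np.zeros(200_000, dtype=int)
for _ in range(3):
    swap = rng.random(values.size) < 0.3
    values = np.where(swap, 1 - values, values)

empirical = np.array([(values == 0).mean(), (values == 1).mean()])
# empirical agrees with p up to sampling error, although no particle was
# ever "in a probability vector" -- the vector only tracks our ignorance.
```

The vector `p` and the ensemble of definite `values` make the same statistical predictions at every step, which is the point: losing track of a value in the model does not mean the value stopped existing.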
What are the ‘detectors’ that were used?
Cult mindset