Q | A | meta
---|---|---
Do quarks oscillate via Charged Weak Interaction (CKM Matrix)? In the Standard Model, with the introduction of the CKM matrix, we have that quark mixing between different generations is possible via a charged current (W boson).
My question is if this also implies that a given quark can change flavour (oscillate), auto-interacting with itself via a W boson.
For example, $$u \rightarrow W^{+} + d \rightarrow c$$
| Yes, this type of oscillation is possible. Your specific example is forbidden because there are no charmed baryon states whose masses overlap with the nucleon, so the $u\to c$ diagram you’ve drawn would have to be followed by a second weak loop to get you back to the initial zero-charm state.
However, the simpler virtual charged-current states,
$$
u\longrightarrow W^+s\longrightarrow u
\\
d\longrightarrow W^-c\longrightarrow d
$$
will contribute to the strangeness and charm of the virtual-quark “sea.”
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Explain Heisenberg's uncertainty principle There was one homework question that asks what the Heisenberg uncertainty principle tells us about the energy of an electron in an infinite square well when the length of the well decreases. The correct answer is that the energy decreases when the length increases. I know that the energy should decrease by the formula for the energy eigenstates, but I feel like this has nothing to do with Heisenberg's uncertainty principle. The uncertainty principle only tells us how accurate a measurement is. Can someone explain how the uncertainty in energy is related to the actual energy of the electron?
| The infinite square well is a time-independent solution.
Energy is related to time through the HUP. Position is related to linear momentum.
It means that the infinite square well, with a definite energy, has arbitrarily large uncertainty in when the particle is in the well.
The width of the well is related to the uncertainty in position. So you need to get the uncertainty in position $x$ and momentum $p$. The momentum is given by the momentum operator $-i\hbar\frac{\partial}{\partial x}$.
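A minimal estimate along these lines (added for illustration; not part of the original answer): confinement to the well means $\Delta x \lesssim L$, so the position-momentum uncertainty relation gives
$$\Delta p \gtrsim \frac{\hbar}{2L},\qquad E_{\min}\sim\frac{(\Delta p)^{2}}{2m}\sim\frac{\hbar^{2}}{8mL^{2}},$$
which grows as the well is narrowed, in line with the exact eigenvalues $E_n = n^{2}\pi^{2}\hbar^{2}/(2mL^{2})$.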
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Peskin and Schroeder's QFT book page 289 On Peskin and Schroeder's QFT book page 289, the book is trying to derive the functional formalism of $\phi^4$ theory in the first three paragraphs. But the book omits many details (I thought), so I have some trouble here.
Going from the free Klein-Gordon theory to $\phi^4$ theory:
$$ \mathcal{L}=\mathcal{L}_0-\frac{\lambda}{4 !} \phi^4. $$
Assuming $\lambda$ is small, we expand
$$\exp \left[i \int d^4 x \mathcal{L}\right]=\exp \left[i \int d^4 x \mathcal{L}_0\right]\left(1-i \int d^4 x \frac{\lambda}{4 !} \phi^4+\cdots\right). $$
Here I thought the book uses an approximation, since $\phi^4$ doesn't commute with $\mathcal{L}_0$ (there is a $\pi$ inside $\mathcal{L}_0$), and according to the Baker-Campbell-Hausdorff (BCH) formula the book omits terms of order $\lambda$. Is this right?
The book further says on p. 289:
"Making this expansion in both the numerator and denominator of (9.18), we see that each is expressed entirely in terms of free-field correlation functions. Moreover, since $$ i \int d^3 x \mathcal{L}_{\mathrm{int}}=-i H_{\mathrm{int}},$$ we obtain exactly the same expression as in (4.31)."
I am really troubled by this; can anyone explain it to me?
Here eq. (9.18) is
$$\left\langle\Omega\left|T \phi_H\left(x_1\right) \phi_H\left(x_2\right)\right| \Omega\right\rangle=$$
$$\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\int \mathcal{D} \phi~\phi\left(x_1\right) \phi\left(x_2\right) \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]}{\int \mathcal{D} \phi \exp \left[i \int_{-T}^T d^4 x \mathcal{L}\right]} . \tag{9.18} $$
And eq. (4.31) is
$$\langle\Omega|T\{\phi(x) \phi(y)\}| \Omega\rangle=\lim _{T \rightarrow \infty(1-i \epsilon)} \frac{\left\langle 0\left|T\left\{\phi_I(x) \phi_I(y) \exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle}{\left\langle 0\left|T\left\{\exp \left[-i \int_{-T}^T d t H_I(t)\right]\right\}\right| 0\right\rangle} . \tag{4.31} $$
| User Zack has already answered OP's first part. Concerning the equivalence between the interaction picture (4.31) and the path integral formulation in the Heisenberg picture (9.18), note that P&S presumably assumes no derivative couplings/interactions. For the latter case, see e.g. this Phys.SE post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
"Lowest yield" atomic weapon possible I would ask that folks be tolerant of my ignorance in this field. When discussing tactical and strategic nuclear weapon yields, I wonder what the lowest possible yield for a fission weapon is. A "dirty bomb" (a conventional bomb spreading nuclear contamination) is not the question. I would like education on the topic.
| The yield of many weapon designs can be adjusted, typically by changing the firing time of external neutron initiators or the quantity of deuterium-tritium boosting gas injected into the core.
The lowest yield setting on a fielded US nuclear weapon was 10 tons TNT equivalent, on the W54 warhead as used in the Davy Crockett artillery shell and the SADM backpack bomb. In principle they could be reduced lower than that, but I can imagine it being difficult to accurately regulate the yield when it's right at the threshold of a fizzle.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/729650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Change in absolute magnitude caused by a change in apparent magnitude Imagine that we have an object with apparent magnitude $m_1$; later, we observe that the same object appears with an apparent magnitude $m_2 = m_1 + \alpha$. What can we say about the absolute magnitude $M$? By the definition I believe we find that the absolute magnitude after the change in apparent magnitude will be $M_2 = M_1 + \alpha$. But does this make sense? I mean, the absolute magnitude is just that, absolute. What are your thoughts on this? What will be the change in the absolute magnitude after a change in the apparent magnitude?
| The absolute and apparent magnitude are simply related by the distance to the source.
If the distance is fixed and the apparent magnitude changes, then the absolute magnitude must have changed by the same amount.
The absolute magnitude is not called that because it can never change - there are lots of examples of variable stars. It merely refers to the fact that it is the apparent magnitude if the object were at a defined distance of 10 pc.
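For reference (added for clarity), the two magnitudes are related by the distance modulus
$$m - M = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right),$$
so at a fixed distance $d$ a change $\alpha$ in the apparent magnitude is matched by an identical change in the absolute magnitude, $M_2 = M_1 + \alpha$, exactly as in the question.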
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Low temperature behavior for ferromagnets: theoretical and experimental discrepancies This is in reference to page 326, 327 of introduction to solid state physics, 8th edition by Charles Kittel
The mean field theory does not give a good description of the variation of $M$ at low temperature. For $T\ll T_{c}$ the argument of $\tanh$ in (9) is large and $\tanh(\xi) = 1 - 2e^{-2\xi}+\cdots$
This appears to be a Taylor expansion for $\tanh$ but the fact that there is an exponential term is foreign to me.
Question:
*
*Can anyone shed some light on how the expression for $\tanh(\xi)$ provided by the author came to be?
*Am I correct to understand that $M(0)$ is the magnetisation at $T = 0$ given in (8) on page 362? Even better if someone could show the steps by which the magnetisation deviation $\delta M$ in (10) comes about.
| For the first question you can rewrite $\tanh$ in terms of exponentials, see e.g. this Math.SE post for details. For the second question, yes, $M(0)$ is the magnetization in the zero temperature limit, where you approximate $\tanh \xi \approx 1$.
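For completeness, the expansion follows directly from writing $\tanh$ in terms of exponentials (a short derivation added here):
$$\tanh\xi=\frac{e^{\xi}-e^{-\xi}}{e^{\xi}+e^{-\xi}}=\frac{1-e^{-2\xi}}{1+e^{-2\xi}}=\left(1-e^{-2\xi}\right)\sum_{n=0}^{\infty}\left(-e^{-2\xi}\right)^{n}=1-2e^{-2\xi}+2e^{-4\xi}-\cdots,$$
so for $T\ll T_c$ (large $\xi$) the deviation from $\tanh\xi\approx 1$ is exponentially small rather than a power law, which is why the expansion looks unlike a Taylor series in $1/\xi$.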
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why does a piece of thread form a straight line when we pull it? Experience tells that if we pull a piece of thread, it forms a straight line, a geodesic in the Euclidean space. If we perform a similar experiment on the surface of a sphere, we will get an arc of a great circle, which is also a geodesic.
How to show this in general, for any geometry?
| If we pull a piece of thread (under gravity), it should really form a catenary (the hyperbolic-cosine curve of a hanging chain), I guess; it only approaches a straight line in the limit where the tension dominates gravity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
Clarifying volume symbol notation with a slash through it I am reading Munson's book on Fluid Dynamics. One thing I found confusing was this notation in the image below, where the Volume has a slash or strikethrough through it. I am not clear about the meaning of that notation. Does it have something to do with intensive versus extensive properties from thermodynamics?
Can anyone clarify the reason for the slash through the volume symbols?
| Munson uses a barred, italicized V to distinguish—when the context doesn't make it clear—volume from velocity, which is shown upright and in bold. (Subscripts referring to volume are italicized and in lowercase, as shown in your example.)
The barred V doesn't seem to indicate any special type of volume, as it's used throughout the text for control volumes, enclosure volumes, fluid volumes, and infinitesimal volumes, for instance.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
I'm having trouble understanding the intuition behind why $a(x) = v\frac{\mathrm{d}v}{\mathrm{d}x}$ I was shown
\begin{align}
a(x) &= \frac{\mathrm{d}v}{\mathrm{d}t}\\
&= \frac{\mathrm{d}v}{\mathrm{d}x}\underbrace{\frac{\mathrm{d}x}{\mathrm{d}t}}_{v}\\
&= v\frac{\mathrm{d}v}{\mathrm{d}x}
\end{align}
However, this feels somewhat unintuitive, and somewhat questionable mathematics-wise. Perhaps it's the best way to explain it, but I was hoping for a more intuitive understanding of this formula.
| the kinematic equations are
$$x(t)=f(t)\quad\Rightarrow\\
v(t)=\frac{dx(t)}{dt}=\frac{df(t)}{dt}\\
a(t)=\frac{dv(t)}{dt}= \frac{d^2f(t)}{dt^2}$$
now if you want to obtain the acceleration $~a=a(x)~$ first you eliminate the parameter $~t~$ with the equation $~x=f(t)\quad\Rightarrow~t=g(x)~$ hence
$$a(x)=\frac{d^2f(t)}{dt^2}\bigg|_{t=g(x)}$$
but also with:
$$t=g(x)\quad\Rightarrow\quad dt=\frac{dg(x)}{dx}\,dx\\
v(x)=\frac{dx}{dt}=\frac {1}{\frac{dg(x)}{dx}}\\
a(x)=\frac{dv}{dt}=\frac{dv(x)}{dx}\,\frac{dx}{dt}=\frac{dv(x)}{dx}\,v(x)$$
Example:
$$x=f(t)=c\,t^2\quad\Rightarrow\\
t=\frac {\sqrt{c\,x}}{c}=g(x)\quad,g'(x)=\frac{1}{2\sqrt{c\,x}}\\
v(x)=2\,\sqrt{c\,x}\quad,v'(x)=\frac{c}{\sqrt{c\,x}}\\
a(x)=\frac{c}{\sqrt{c\,x}}\,2\,\sqrt{c\,x}=2\,c $$
or
$$a(x)=\frac{d^2f(t)}{dt^2}\bigg|_{t=g(x)}=2\,c $$
also
$$a(x)=-\frac{g''(x)}{\left(g'(x)\right)^{3}}$$
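As a quick cross-check of the worked example, here is a minimal SymPy sketch (added for illustration; not part of the original answer):

```python
# Check a(x) = v dv/dx for the example x(t) = c t^2.
import sympy as sp

t, c, x = sp.symbols('t c x', positive=True)

f = c * t**2                       # x = f(t)
v_t = sp.diff(f, t)                # v(t) = 2 c t
a_t = sp.diff(f, t, 2)             # a(t) = 2 c

g = sp.sqrt(x / c)                 # t = g(x), eliminating the parameter t
v_x = sp.simplify(v_t.subs(t, g))  # v(x) = 2 sqrt(c x)
a_chain = sp.simplify(v_x * sp.diff(v_x, x))   # v dv/dx

print(v_x)             # 2*sqrt(c)*sqrt(x)
print(a_t, a_chain)    # both equal 2*c, as in the answer
```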
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/730923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
} |
Thermal Equilibrium and adiabatic walls - Zemansky In Zemansky's "Heat and Thermodynamics" it is stated that:
A thermodynamic system is in thermal equilibrium with its surroundings iff it is in mechanical and chemical equilibria with its surroundings, it is delimited by diathermic walls and its macroscopic coordinates do not change with time (hence they may be called thermodynamic coordinates).
A thermodynamic system is in thermodynamic equilibrium with its surroundings iff it is in mechanical, chemical and thermal equilibria with its surroundings.
Now, I have two questions concerning these definitions:
*
*Is there any difference between thermal and thermodynamic equilibria? (it seems like there should be, at least that is what I have read in the dedicated wikipedia page: https://en.wikipedia.org/wiki/Thermal_equilibrium);
*If diathermic walls are needed in the definition of thermal equilibrium, how is it possible to use the same concept with systems that are delimited by adiabatic walls? (Indeed, Zemansky speaks of such systems as if they can be in an equilibrium state.)
As always, any comment or answer is much appreciated and let me know if I can explain myself clearer!
| For an adiabatic system to be in thermodynamic equilibrium, there can be no spatial variations within the system in temperature, pressure, or chemical potential.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Standing waves in a resonance tube I am doing an experiment about standing waves in a resonance tube. I use a bucket of water, a waterproof tube (open at both ends), and a frequency generator app. I have two set-ups, A and B:
A. Setting the frequency to be constant and dipping the tube into the bucket of water until I hear resonance that signifies standing waves. I record the effective length of tube at which this occurs, from the part where the tube touches the water to the other end. Then, I compute the corresponding wavelengths.
B. Setting the length to be constant but with increasing frequency. I record the frequency at which the standing waves occur and compute the wavelengths.
Using the formula for the speed of sound, $v = \text{wavelength} \times \text{frequency}$, I have computed the experimental speed of sound. Now, using the room temperature to obtain the theoretical speed of sound, I found that Set B resulted in less deviation from this theoretical value than Set A. Is this expected in general? What do you think are the main sources of error?
| It's hard to comment on a specific experimental setup, but I am not surprised by your result. In B, the machine can control the value of the frequency much more precisely than you can manually submerge the tube into the water to a certain length in A.
One thing to try calculating is: if your length measurement is off by, say, $0.1\,\mathrm{cm}$, $0.5\,\mathrm{cm}$, or $1\,\mathrm{cm}$, what % change does that produce in the speed of sound result?
Another thing to assess is whether your Method A results seem to be consistently too high, too low, or randomly scattered around the true value.
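As a rough illustration of that sensitivity calculation, here is a minimal Python sketch (the frequency, length, and open-closed $\lambda = 4L$ fundamental are assumed placeholder values, not the questioner's data):

```python
# How much does a small error in the measured tube length shift the inferred speed of sound?
freq_hz = 512.0      # assumed driving frequency
length_m = 0.167     # assumed effective length at the fundamental (~ lambda/4)

def speed_of_sound(length_m, freq_hz):
    wavelength = 4.0 * length_m      # tube closed at the water surface, open at the top
    return wavelength * freq_hz

v0 = speed_of_sound(length_m, freq_hz)
for err_cm in (0.1, 0.5, 1.0):
    v = speed_of_sound(length_m + err_cm / 100.0, freq_hz)
    print(f"length error {err_cm} cm -> {100 * (v - v0) / v0:.1f}% change in v")
```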
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to derive the $vx/c^2$ term from first principles? In Lorentz transforms, the formula for time transformation is
$$t' = \gamma \left( t - \frac{v x}{c^2} \right)$$
I understand that the term $\frac{v x}{c^2}$ represents "time delay" seen by a stationary observer but I don't understand how to derive it from first principles. I understand $v/c$ as speed and $x/c$ as distance. Why multiply speed with distance? I thought time is distance divided by speed?
| I'm not sure there's a meaningful way to derive it apart from the full Lorentz transformations, but one thing reflected in the equation is that a time interval $t'$ for one observer will correspond partly to a time interval $t$ and partly to a space interval $x$ for a different observer.
This happens in exactly the same way that, in 2-dimensional Euclidean geometry with two different sets of axes set at an angle, an interval that is fully in the $x$ direction of one set of axes will have components along both $x'$ and $y'$ of the other set.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Where does energy go? Where does energy go?
Given is the Michelson interferometer. One sends light
in the form of a plane wave $E_0\exp[i(kx-\omega t)]$ into the interferometer.
The position of one of the mirrors is adjusted in such a way
that at the output of the interferometer
completely destructive interference takes place. Where does the energy go?
In my opinion, the energy is conserved, but I do not know how to explain this. I think it has to do with the Poynting vector, but I am not sure.
| The Michelson interferometer produces two beams. One is the usual one typically directed toward a "screen" or photodetector. The other is reflected back toward the source (laser).
If you adjust the mirrors for destructive interference at the output beam, you will have constructive interference at the reflected beam, and vice versa.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the continuity equation used to **define** the current density? Recall the continuity equation:
$$\frac{\partial}{\partial t}\rho+\boldsymbol{\nabla\cdot J}=0$$
Given $\rho$, there is obviously not a unique solution $\boldsymbol{J}$, but I guess one could choose an additional requirement (e.g. some condition on the rotation (curl)) such that there is a unique solution (by the Helmholtz theorem?). The idea may be to pick the most "obvious" solution. So I hope that someone can elaborate on the mathematical aspect and tell us if this is actually done in practice.
| The continuity equation is not the definition of the current density.
One way to define the current density $\mathbf J(\mathbf r)$ is via the three components $J_x(\mathbf r), J_y(\mathbf r), J_z(\mathbf r)$ where
\begin{equation}
J_x(\mathbf r) = \lim_{s \to 0} \frac{I_x(\mathbf r, s)}{\pi s^2}
\end{equation}
where the notation $I_x(\mathbf r, s)$ means the current passing in the positive x-direction through a disk that is perpendicular to the x-axis, centered around the point $\mathbf{r}$, with radius $s$. $J_y$ and $J_z$ are defined similarly. If you choose to use this definition, you might have to do a bit of work to show that the $\bf J$ so defined actually transforms as a vector. However, I think it has pedagogical value in the context of this question: it shows that "current density" is fundamentally nothing more than current per unit area.
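A complementary remark (added here): with such a definition in hand, the continuity equation is then derived from conservation of charge plus the divergence theorem, rather than serving as the definition of $\mathbf J$:
$$\frac{d}{dt}\int_V \rho\,dV=-\oint_{\partial V}\mathbf J\cdot d\mathbf A=-\int_V\boldsymbol{\nabla}\cdot\mathbf J\,dV\quad\Longrightarrow\quad\frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\cdot\mathbf J=0,$$
since the volume $V$ is arbitrary.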
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/731976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Could you feel your weight falling through a tube drilled through the center of the earth? Suppose you drill a hole through the center of the earth (assume the earth is uniform and no air resistance) and you jump in. Would you be "weightless" throughout the entire fall?
The reason I ask is that if you jump off a cliff, throughout the entire fall you feel weightless (just like when astronauts train for the weightless feeling in orbit, they practice by going in an airplane and having the airplane fall to approximate the experience). Does this same weightless experience happen when you are falling through the center-of-the-earth tube?
I know that if you are stationary at the center of the earth, then you are weightless; but, I'm interested in falling throughout the entire hole.
The reason why I'm confused is that it's well-known that when you fall, you oscillate (simple harmonic motion) up and down the tube and this oscillation seems to imply that you will feel your weight.
|
Suppose you drill a hole through the center of the earth (assume the earth is uniform and no air resistance) and you jump in. Would you be "weightless" throughout the entire fall?
Yes, because you would follow the spacetime geodesics. The case you described has been precisely analyzed by Edward Parker in A relativistic gravity train, where he calculates the trajectory of a test particle under the gravitational influence of a ball with
uniform mass density as the particle falls through the ball’s diameter.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Branes in Closed Bosonic String Theory I've seen in these lectures by Freddy Cachazo that type II-A/B superstring theory has to contain D-branes and open strings non-perturbatively, even though it appears to only contain closed strings in the perturbation theory. I've also read here that heterotic superstring theory might also contain p-branes.
Is it known whether closed bosonic string theory contains branes, i.e. objects different from the fundamental string, even if they don't appear perturbatively, as in the type II A/B superstring case?
| I would say that the bosonic string doesn't make sense non-perturbatively so the question of whether it contains non-perturbative objects is meaningless to begin with.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does a random number generator have real entropy? In thermodynamics, entropy is defined for gases. Of course, my laptop is not a gas. However, it contains a random number generator and I have seen the word ‘entropy’ being used in this context. Is this the same entropy? How can this entropy be linked to the definitions from thermodynamics?
UPDATE
I think my question is different from this question. That question is about information content, for example a book. However, this question is about the entropy of a random number generator. Those seem to be different because the contents of a book are fixed while the output of a random number generator is not yet fixed.
| I am not sure about my expertise. But here is a shot anyway.
The 'random number generator' will keep giving out numbers, and it never ends, so it keeps drawing energy from your computer. And I doubt you can convert the heat created back into energy that would power your computer.
Of course, the random number generator may be applied to solving problems in mathematics and physics, which in turn may generate energy back.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 6
} |
If the spin operator is a matrix, why aren't the momentum or position operators a matrix? So my question is to do with the fact that some operators seem to be matrices while others are not. I suspect that if an operator has continuous eigenvalues then it is not a matrix, but if its spectrum is discrete then it is? I was hoping for an explanation of why this might be the case.
| *
*If the position operator $\hat{x}$ and the momentum operator $\hat{p}$ were represented by finite-dimensional square matrices, then ${\rm Tr}[\hat{x},\hat{p}]=0$, which would violate the CCR (see the explicit trace computation after this list), cf. e.g. this Phys.SE post.
For their infinite-dimensional representations, see instead the theorem of Stone and von Neumann.
*In contrast, the $so(3)$ Lie algebra of spin operators has finite-dimensional representations.
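To spell out the trace argument from the first point (a standard two-line computation, added for illustration): for $N\times N$ matrices the trace is cyclic, so
$$\operatorname{Tr}[\hat x,\hat p]=\operatorname{Tr}(\hat x\hat p)-\operatorname{Tr}(\hat p\hat x)=0,\qquad\text{whereas}\qquad\operatorname{Tr}\!\left(i\hbar\,\mathbb{1}_N\right)=i\hbar N\neq 0,$$
so the canonical commutation relation $[\hat x,\hat p]=i\hbar\,\mathbb{1}$ cannot hold in any finite dimension.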
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/732694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is the resonance of a wine glass and resonance in an electrical circuit the same thing? I am quite a noob at Physics, but I find it quite interesting, and resonance was especially intriguing when I first found out about it, but now that I have done a little bit of research, I either get results of a wine glass (or any object) breaking by the use of its natural frequency or I get a video/article that explains resonance in a circuit with an inductor and capacitor. Is the concept that allows both of these phenomena the same thing? Or are they completely different things that are just coincidentally called resonance?
| Yes, resonance in both of these different phenomena is conceptually the same thing: it occurs when the external driving force and the velocity of the driven object (the electromotive force acting in the circuit and the electric current in the circuit) oscillate in phase. This can happen only when the frequency of the external oscillating force equals the so-called natural frequency (or resonant frequency) of the object, which is fixed by the object's properties such as mass and stiffness, or, in the case of a circuit, by the inductances, capacitances and resistances of its components.
From mechanics of driven harmonic oscillator we know that in such phase agreement, transfer of work to the oscillating body is the most efficient one (think of a person pushing a child on a swing, it's easiest when phases match).
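A compact way to see this (a standard result, added here for illustration): for a sinusoidal drive $F(t)=F_0\cos\omega t$ producing a steady-state velocity $v(t)=v_0\cos(\omega t-\varphi)$, the time-averaged power delivered is
$$\langle P\rangle=\tfrac{1}{2}F_0 v_0\cos\varphi ,$$
which is maximal when the phase lag $\varphi$ vanishes, i.e. exactly at resonance; a series RLC circuit obeys the same relation with $F_0\to V_0$ and $v_0\to I_0$.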
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How can I show the following contraction of the electromagnetic field strength and its dual? Given the electromagnetic field strength $F^{\mu\nu}$, and its dual $$\tilde{F}^{\mu\nu} =\dfrac{1}{2}\varepsilon^{\mu\nu\alpha\beta}F_{\alpha\beta},$$
how can I show that
$$\tilde{F}^{\mu\nu}F_{\nu\rho} = -\dfrac{1}{4}\delta^\mu_\rho \tilde{F}^{\alpha\beta}F_{\alpha\beta}\,?$$
| A possible proof can be: from Lorentz invariance you have
$$\tilde{F}^{\mu \nu} F_{\rho \nu}= C\, \delta_{\rho}^{\mu} , \qquad C \in \mathbb{R}.$$
Now, taking the trace on both sides, you obtain $C$:
$$\tilde{F}^{\alpha \nu} F_{\alpha \nu}= C\, \delta_{\alpha}^{\alpha}= 4 C,$$
so
$$\tilde{F}^{\mu \nu} F_{\rho \nu}= \frac{1}{4}\, \delta_{\rho}^{\mu}\, \tilde{F}^{\alpha \beta} F_{\alpha \beta}.$$
Finally, switching the index ($\rho \leftrightarrow \nu$) and using $F_{\rho \nu}=-F_{\nu \rho}$, you obtain
$$\tilde{F}^{\mu \nu} F_{\nu \rho}= -\frac{1}{4}\, \delta_{\rho}^{\mu}\, \tilde{F}^{\alpha \beta} F_{\alpha \beta}.$$
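For readers who like to check such index identities numerically, here is a small NumPy sketch (added for illustration; it uses the Levi-Civita symbol directly and works for any antisymmetric $F_{\mu\nu}$, so the choice of metric convention does not affect the check):

```python
# Numerical spot-check of  tildeF^{mu nu} F_{nu rho} = -(1/4) delta^mu_rho (tildeF.F)
# for a random antisymmetric F_{mu nu}.
import numpy as np
from itertools import permutations

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inversions = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
    eps[perm] = (-1) ** inversions

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
F = A - A.T                                        # random antisymmetric F_{mu nu}

Ftilde = 0.5 * np.einsum('mnab,ab->mn', eps, F)    # tildeF^{mu nu}

lhs = np.einsum('mn,nr->mr', Ftilde, F)            # tildeF^{mu nu} F_{nu rho}
rhs = -0.25 * np.eye(4) * np.einsum('ab,ab->', Ftilde, F)
print(np.allclose(lhs, rhs))                       # True
```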
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Explain Feynman's explanation why KE + PE = constant I'm reading Feynman's lecture on physics, and I'm having trouble following the logic. In section 14-4 he says:
"Now we have the following two propositions: (1) that the work done by a force is equal to the change in kinetic energy of the particle, but (2) mathematically, for a conservative force, the work done is minus the change in a function $U$ which we call the potential energy. As a consequence of these two, we arrive at the proposition that if only conservative forces act, the kinetic energy $T$ plus the potential energy $U$ remains constant:
$T + U =$ constant."
I'm having trouble understanding the logic. How do they deduce $T + U =$ constant from the two propositions? I don't see how one follows from the other. I know $T + U =$ constant from conservation of energy, but it seems like Feynman is using a different method here.
| You have the following equations:
\begin{align}
& W= \Delta T \quad \text{(work-energy theorem)} \\
& W= -\Delta U \quad \text{(only valid for a conservative force)}
\end{align}
so
$$\Delta T=-\Delta U.$$
By definition you have $\Delta T= T_f-T_i$ and also $\Delta U= U_f-U_i$, so the last equation becomes
$$T_f-T_i= U_i-U_f \quad\Rightarrow\quad T_f+U_f=T_i+U_i.$$
As you can see, you can finally define a new quantity which remains constant during the motion:
$$E= T+U.$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is it possible to neglect higher order terms in the variation of the action? In order to get the Euler-Lagrange equations, we should find the variation of the action $\delta S$ and to neglect higher-order terms:
$$\delta S=\int L(q+\delta q,\,q'+\delta q',\,t)dt-\int L(q ,\,q',\,t)dt+O[(\delta q)^2]$$
I have two questions:
*
*Why is it legal to neglect the higher-order terms?
*If we get the Euler-Lagrange equations from a first-order approximation, doesn't it mean that the equations themselves are only an approximation?
| The following is not mathematical rigorous, but a sketch of the central ideas.
To start, let us consider the case of a (differentiable) real scalar function $f: \mathbb R \longrightarrow \mathbb R$. We say that $x$ is a stationary point if and only if $f^\prime(x)=0$. The Taylor expansion of $f$ around a point $x$ is given by:
$$f(x+h) = f(x) + f^\prime(x) \, h + \mathcal O(h^2) \quad. \tag{1} $$
Now if $x$ is a stationary point, we have $f^\prime(x) =0$, which, in view of equation $(1)$, yields
$$ f(x+h)-f(x) = \mathcal O(h^2)\tag{2} \quad . $$
What this means is that small changes of $x$ induce only changes to second (and higher) order in $f$ if $x$ is a stationary point.
Conversely, if equation $(2)$ holds, then $x$ is a stationary point, which can be seen by dividing $(2)$ by $h$ and taking the limit $h \to 0$, yielding $f^\prime(x)=0$.
This also means that you can find the stationary points of $f$ by (Taylor) expanding it in terms of $h$ and setting the terms proportional to $h$ to zero. But we don't neglect any terms or approximate anything.
The very same line of thought applies to functionals: Indeed, consider a functional $S:F\longrightarrow \mathbb R$, with a suitably chosen space of functions $F$.
We define the $n$-th functional derivative as
$$ \delta^{n} S[f][\eta]:=\frac{\mathrm d^n S[f+\epsilon \, \eta]}{\mathrm d \epsilon^n} \big\vert_{\epsilon=0}\tag{3} \quad ,$$
where $\epsilon \in \mathbb R$ and $\eta$ denotes a suitable function. We can further define a Taylor series as follows:
$$ S[f+h\eta] = S[f] + \sum\limits_{n=1}^{\infty} \frac{h^n}{n!}\, \delta^{n} S[f][\eta] \tag{4}$$
We say that $S$ is stationary at $f$ if the first functional derivative vanishes for all $\eta$, or, equivalently, $S$ does not change to first order for a small change of $f$, i.e. for all $\eta$ we have
$$ S[f+h \eta] - S[f] = \mathcal O(h^2)\tag{5} \quad .$$
Again, note that $(5)$ is equivalent to say that the functional derivative of $S$ at $f$ vanishes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/733860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How do we prove that the 4-acceleration transforms as a 4-vector in Special Relativity? In order to define the acceleration of a body in its own frame, we need to first prove that the acceleration is a four-vector so that its dot product with itself can then be labeled as acceleration squared in the rest frame. For velocity and displacement vectors, we can show that they have a constant dot product. But how do we prove that for acceleration?
| In physics, we prove things with experiments. Four-vectors are components of a mathematical model. Does that model pass experimental test? Yes, in many cases. We use it because it passes those tests, not because of any mathematical proof.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Higher Dijkgraaf-Witten Theory I am trying to understand higher-form symmetries in TQFT. In particular the higher-form version of Dijkgraaf-Witten Theory.
It is known that for a 0-form symmetry we can specify the principal G-bundle through homotopy classes of the classifying map
$$ M \rightarrow BG = K(G,1). $$
This is known from homotopy theory and Eilenberg-MacLane spaces. Indeed, the homotopy classes of these maps are in bijection with the first cohomology group $H^1(M,G)$, which for a finite group is isomorphic to $\operatorname{Hom}(\pi_1(M),G)$ and fits the usual gauge theory:
$$ [M,K(G,1)] \simeq H^1(M,G) \simeq \operatorname{Hom}(\pi_1(M),G) $$
I cannot find any reference for a higher version of this. Should I expect a naive generalization? This is motivated by the fact that for a 1-form symmetry $H^2(M,G)$ works as a straightforward generalization to the previous case. But does homotopy theory tell me something about the classification of gerbes via classifying maps?
There is a follow-up question to this, when the symmetry structure is an honest 2-group.
| Higher-form symmetries are abelian so, with $G$ a discrete abelian group and $p\in\mathbb{Z}_{\geq 0}$ (or $G$ a discrete group, not necessarily abelian, if $p=0$):
$$ [M, K(G,p+1)] \cong \mathrm{H}^{p+1}(M;G) \cong \operatorname{Hom}\left(\pi_{p+1}(M),G\right) $$
and everything works as it should.
See e.g. the Wikipedia page for Eilenberg-MacLane spaces or the paper From gauge to higher gauge models of topological phases by Delcamp and Tiwari$^{(*)}$.
$^{(*)}$In that paper they do not explicitly mention $K(G,p+1)$, but they define it descriptively as a higher-classifying space $B^{p+1}G$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How do non-holonomic constraints work in Hamiltonian formalism? In the Lagrangian formalism, if $(M, g)$ is our configuration manifold, equipped with a Riemannian metric $g\in \operatorname{Hom}(TM\otimes TM, \mathbb{R})$, the Lagrangian function $\mathcal{L} : M\times TM\times [0, 1]\rightarrow \mathbb{R}$ is defined as $$\mathcal{L}(x(t), \dot{x}(t), t)= \frac{1}{2}mg(\dot{x}(t), \dot{x}(t))-U(x(t), t).$$ If our particle moves under the influence of gravity, then $$\frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{x}}\right)-\frac{\partial \mathcal{L}}{\partial x} = 0.$$ And by using Lagrange multipliers, we could generalize this equation to holonomic physical systems. But unfortunately, there's no canonical way to use Lagrangian mechanics for non-holonomic physical systems. At this point, how does Hamiltonian mechanics behave?
| It depends on whether the first principle for the Lagrangian formulation is
*
*a variational principle [i.e. the stationary action principle (SAP)],
*or not,
cf. e.g. this & this related Phys.SE posts.
*
*In the 1st case, one can in principle find a Hamiltonian formulation via a singular Legendre transformation, cf. e.g. my Phys.SE answers here & here.
*In the 2nd case, it is not entirely clear what the corresponding Hamiltonian formulation is. See also e.g. this related Phys.SE post.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The uncertainty principle for angular momentum and angular position For $$L_{z}=xp_{y}-yp_{x}$$ we see that angular position in the $x-y$ plane is canonically conjugate,
$$\theta_{x-y}=\mathrm{tan}^{-1}\left(\frac{y}{x}\right)$$
that is,
$$\{\theta_{x-y},L_{z}\}=1$$
where $\{\}$ represent Poisson brackets. Applying canonical quantisation, we see that
$$[\hat{\theta}_{x-y},\hat{L}_{z}]=i\hbar$$
therefore, applying the generalised uncertainty principle, we should have
$$\Delta\hat{\theta}_{x-y}\Delta\hat{L}_{z}\geq\frac{\hbar}{2}$$
This is where I have some trouble though. We know that angular momentum in QM is quantised in units of $\hbar$, and hence measurements of $L_{z}$ yield exact values. Hence an eigenstate of $\hat{L}_{z}$,
$$\hat{L}_{z}|m\rangle=m\hbar|m\rangle$$
has an uncertainty in angular momentum of 0, $\Delta \hat{L}_{z}=0$. This doesn't make sense, as it seems to suggest that $\Delta\hat{\theta}_{x-y}=\infty$, i.e. eigenstates of angular momentum correspond to states where we are completely ignorant of the angle of the state in the $x-y$ plane in position space. So I guess my question is, do we have to treat the generalised uncertainty principle differently for angular momentum since it is a discrete observable?
| You don't even need the uncertainty principle for a contradiction here: $\widehat{L}_z$ is hermitian ($\widehat{L}_z^\dagger=\widehat{L}_z$) and therefore by applying $\cdot^\dagger$ to your last equation, we get $\langle m|\widehat{L}_z=m\hbar\langle m|$, which directly results in the contradiction:
$$i\hbar
=i\hbar\langle m|m\rangle
=\langle m|[\widehat\theta_{x-y},\widehat{L}_z]|m\rangle
=\langle m|\widehat\theta_{x-y}\widehat{L}_z|m\rangle
-\langle m|\widehat{L}_z\widehat\theta_{x-y}|m\rangle
=m\left(\langle m|\widehat\theta_{x-y}|m\rangle
-\langle m|\widehat\theta_{x-y}|m\rangle\right)=0$$
I think the problem here is the canonical quantization, which simply might not preserve the Poisson bracket. An important result (the Groenewold-van Hove theorem) is that there is no quantization map $Q$ from functions on a classical phase space to operators on a quantum Hilbert space such that:
$$[Q(f),Q(g)]=i\hbar Q(\{f,g\})$$
is always fulfilled.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Newton's Second Law and External Forces I was reading about Newton's Second Law, and I saw that only external forces can move a body. However, when animals and people walk, when rockets launch, and cars drive, isn't it an internal force that causes a change? How do these things fit into Newton's Second Law?
| One has to be careful how to define what is the system under consideration so as to know what is internal and external. When animals and people walk, they are acted on by an external force - the friction between their feet and the ground. With rockets, hot gases are forced out the back. They exert an equal and opposite, now external, force on the rocket.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/734709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of momentum We say that momentum is the measure of how a body is moving, or the quantity of movement inside a body.
But what does this definition really mean?
These terms are very vague.
$p=mv$: why does the movement inside the body depend on its mass?
| An object's momentum is the product of its mass and its velocity.
$$\vec P = m \vec v$$
Nothing vague about it.
Why is this referred to as the "quantity of motion"? That's more of a history of science question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Are thermodynamic quantities based on frame of reference The kinetic energy $mv^2/2$ does depend on the frame of reference (FoR), and hence maybe so does the internal energy?
I've also seen temperature being defined as "a measure of the average kinetic energy of the particles".
Are thermodynamic quantities based on frame of reference? Is there a distinction in this case between state functions like "entropy" and path functions like "work done"?
| For simplicity and convenience, we often conduct thermodynamics using a frame of reference in which the system of interest is motionless, but this is not essential. I quote at length from Callen's Thermodynamics and an Introduction to Thermostatics:
In accepting the existence of a conserved macroscopic energy function as the first postulate of thermodynamics, we anchor that postulate directly in Noether's theorem and in the time-translation symmetry of physical laws.

An astute reader will perhaps turn the symmetry argument around. There are seven "first integrals of the motion" (as the conserved quantities are known in mechanics). These seven conserved quantities are the energy, the three components of linear momentum, and the three components of the angular momentum; and they follow in parallel fashion from the translation in "space-time" and from rotation. Why, then, does energy appear to play a unique role in thermostatistics? Should not momentum and angular momentum play parallel roles with the energy?

In fact, the energy is not unique in thermostatistics. The linear momentum and angular momentum play precisely parallel roles. The asymmetry in our account of thermostatistics is a purely conventional one that obscures the true nature of the subject.

We have followed the standard convention of restricting attention to systems that are macroscopically stationary, in which case the momentum and angular momentum arbitrarily are required to be zero and do not appear in the analysis. But astrophysicists, who apply thermostatistics to rotating galaxies, are quite familiar with a more complete form of thermostatistics. In that formulation the energy, linear momentum, and angular momentum play fully analogous roles. [emph. added]
Callen then gives an example involving a stellar atmosphere in motion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Momentum: How can a rolling wheeled vehicle turn 180 degrees without stopping? "In Newtonian mechanics momentum is the product of the mass and velocity of an object. It is a vector quantity, possessing a magnitude and a direction. An object will stay still or keep moving at the same speed and in a straight line, unless it is acted upon by an external force."
How then, is it possible for a rolling wheeled vehicle to turn 180 degrees (reversing the direction of its momentum/kinetic energy) without stopping?
An example: when driving my car, to bring my forward kinetic energy to zero I apply my brakes and convert it to heat energy. To move in the opposite direction I then need to set the gearbox in reverse and apply extra energy. Yet if I switch off the engine (adding no new kinetic energy) and turn the wheel, then I can achieve the same effect (with small losses due to wheel friction) without dissipating or exerting any energy. How is this possible?
|
How can a rolling wheeled vehicle turn 180 degrees without stopping ?
The answer is in the last sentence of the passage you quote:
An object will stay still or keep moving at the same speed and in a straight line, unless it is acted upon by an external force
The external force that acts on a coasting car to change its direction is the sideways friction between the car's tyres and the road. If the road were perfectly smooth then you could not steer the car. Similarly, if the car's wheels could not turn at an angle to its direction of motion, then it could not be steered.
There are vehicles without steerable wheels, but they all use various mechanisms for creating a difference in friction from one side of the vehicle to the other. A sledge is steered by the rider shifting their weight from one side to the other. A tank is steered by making its tracks move at different speeds.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/735926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Size of metal domain needed to reflect light; are small graphene sheets shiny? I remembered that the shininess of a material is because of reflection, i.e. surface currents responding to light. Mathematically, one can solve Maxwell's equations under a relevant boundary condition with a plane-wave ansatz. This math only corresponds to situations where the size of the metal surface is much larger than the wavelength of light. What happens if the surfaces are smaller than the wavelength of, say, ~600 nm light?
What happens for metals whose domain size is less than the wavelength?
What happens for small graphene sheets whose edges are passivated by hydrogen and are shorter than the wavelength? If electrons cannot move across different domains that are shorter than the wavelength, can the material possibly be shiny?
| We say something is shiny when the reflection from it is specular i.e. the angle of reflection is equal to the angle of incidence. Note that it doesn't matter whether the substrate is a metal (i.e. conducting) or not as insulating plastics can also be shiny.
The problem is that objects of around the wavelength of light or smaller reflect light in all directions. This is known as Mie scattering, or if the object is much smaller than the wavelength we get Rayleigh scattering instead. In both cases light is reflected at a range of angles, not just at the angle of incidence, and the reflection is diffuse rather than specular.
There isn't a sharp transition to specular reflection as the size of the reflecting object increases, so the size at which we would say the object is shiny is a matter of judgement. From personal experience mica flakes of around 10 $\mu$m and larger look shiny so as an order of magnitude estimate I would say the transition is somewhere around ten times the wavelength of the light.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Non- Local operators and Entanglement Given a separable state, $|\psi\rangle$ = $|a\rangle\otimes|b\rangle$, operating on this state with a local operator of the form, $A\otimes B$ will not lead to an entangled state. Is the converse true? i.e., given that I know that action of an operator on a separable state is a separable state, can I conclude that the operator must've been a local operator?
or will the action of a non-local operator always entangle a separable state?
| No.
The swap operator will never entangle any product state, yet it is not a local operator.
(Note, however, that it will create entanglement if it acts on part of a larger system.)
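A tiny NumPy check of this (added for illustration; the reshaping at the end computes the operator Schmidt rank across the $A|B$ cut, which would be 1 for a local operator $A\otimes B$):

```python
# The SWAP gate on two qubits maps any product state to a product state,
# yet it is not of the form A (tensor) B.
import numpy as np

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

a = np.array([1, 2j]) / np.sqrt(5)        # arbitrary single-qubit states
b = np.array([3, 1]) / np.sqrt(10)
psi = np.kron(a, b)
out = SWAP @ psi                           # equals kron(b, a): still a product state
print(np.allclose(out, np.kron(b, a)))     # True

# Operator Schmidt rank of SWAP across the A|B cut: reshuffle indices and
# count nonzero singular values; a local operator A (x) B would give rank 1.
M = SWAP.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
print(np.linalg.matrix_rank(M))            # 4, so SWAP is not A (x) B
```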
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does Proca's hypothesis make sense of giving mass to the photon in reference to special relativity? The Romanian physicist Proca formulated his famous Lagrangian to describe a hypothetical massive photon. From it we derive, as equations of motion, the relations that the electric and magnetic fields must obey (the analogous of Maxwell's equations for massive photons). We know that Einstein deduced the Lorentz transformations by raising the constancy of the speed of light to a principle. In this sense, giving a mass to the photon would mean going against this principle. However, we also know that transformations between inertial systems can be obtained based on the principles of relativity, isotropy and causality. So there are two types of groups (excluding space-time inversions)
which simultaneously satisfy these principles: Galileo's and Lorentz's. In the latter, $c$ appears as a limiting speed that cannot be exceeded and which, a priori, is not related to the speed of light. Despite this, however, we choose the Lorentz transformations and discard those of Galileo because, since Maxwell's equations must be invariant, the speed of light must remain the same in all inertial reference systems. So, even in this way, Proca's hypothesis seems to make no sense. Can you explain to me why the Proca hypothesis makes sense?
Furthermore, in this perspective, since the speed of light can no longer be the limiting speed in the Lorentz transformations, what meaning should we give to $c$?
| In this scenario, $c$ would still be the maximum possible velocity at which information can propagate locally, i.e. in a piece of spacetime small enough to be considered Minkowskian, even if no information would actually travel with that speed.
The parameter $c$'s existence does not require light in Einstein's massless sense: that was just the heuristic he used to derive it. The relevant physical postulate that can be used instead of anything related to light directly is that there is an upper limit to the speed or, if you like, a lower bound on the per-distance latency, of transmission of information from one spatial point to another.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Velocity in power calculations in different inertial frames In calculating power using the formula $\underline{F}\cdot\underline{v}$, what is the correct velocity to use? Does one use the velocity of the body on which the force is acting, or the velocity of the body providing the force? I always thought it was the former (at least because in the case of force fields the field doesn't have a velocity, so the only velocity is that of the body the force is acting on).
However, when I use this understanding on an example problem I seem to end up with results about power calculations in different reference frames that I am struggling to make sense of. I have posted this question here for you to see the numbers.
Any clarity people can provide on this point (either in general or in specific relation to the example question I posted) would be much appreciated.
| Given a reference frame, the total power of a force field acting on a system is the sum of the dot product of each force and the velocity of the point where the force is acting
$\displaystyle P = \sum_i \mathbf{F}_i \cdot \mathbf{v}_i$,
being $\mathbf{F}_i$ lumped forces. If you deal with a continuous distribution of forces per unit volume $\mathbf{f}$ in a volume $V$, stresses (forces per unit area) $\mathbf{t_n}$ on a surface $S$, or forces per unit length $\boldsymbol{\gamma}$ on a line path $\Gamma$, the summation is replaced by integration over the corresponding domains,
$\displaystyle P = \sum_i \mathbf{F}_i \cdot \mathbf{v}_i + \int_V \mathbf{f} \cdot \mathbf{v}\, dV + \int_S \mathbf{t_n} \cdot \mathbf{v}\, dS + \int_{\Gamma} \boldsymbol{\gamma} \cdot \mathbf{v}\, d\ell $.
Power of forces on a rigid body performing translation, not rotation. In this situation, the velocity of all the points of the body is the same, $\mathbf{v}(\mathbf{r}) = \overline{\mathbf{v}}$, and thus the power becomes
$\displaystyle P^{transl} = \left[ \sum_i \mathbf{F}_i + \int_V \mathbf{f}\, dV + \int_S \mathbf{t_n}\, dS + \int_{\Gamma} \boldsymbol{\gamma}\, d\ell \right] \cdot \overline{\mathbf{v}} = \left[ \mathbf{F}^{tot,lump} + \mathbf{F}^{tot,V} + \mathbf{F}^{tot,S} + \mathbf{F}^{tot,\Gamma} \right] \cdot \overline{\mathbf{v}} = \mathbf{F}^{tot} \cdot \overline{\mathbf{v}}$.
Kinetic energy theorem. The kinetic energy theorem states that the time derivative of the kinetic energy of a closed system equals the total power of the forces
$\dot{K} = P^{tot}$,
see https://physics.stackexchange.com/q/735204.
Kinetic energy theorem and change of reference frames. When you change the reference frame used to evaluate the position and the velocity, both the kinetic energy and the power of forces change, but the kinetic energy theorem still holds.
See https://physics.stackexchange.com/q/734777
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/736670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Why is the mass of small elements taken as $∆m$ in the center of mass of a continuous body? A continuous body has a continuous distribution of mass. Doesn't $\Delta m$ mean $m_f - m_i$? But is the mass changing? If yes, how is the mass varying? Why is the mass of the small elements in a body taken as $\Delta m$? Why isn't it taken just as $m$ (the mass of the small element)?
| You are right: it is a confusing notation. Usually it is used to "construct integrals", where it represents a finite-sized small chunk of the body, which is later assumed to go to zero size when a limit is taken. I prefer to use something like $m_n$ for these "chunk masses", and reserve the $\Delta$-notation for coordinate variables in which your "final minus initial" idea makes sense.
For example, consider a bar of length $L$ which has a changing linear mass density such that if you place one end of the bar at the origin of the $x$ axis, the density is given by $\lambda(x) = \lambda_0 + a x \; (0 \le x \le L)$. What is the mass of the bar?
We start by dividing the bar into a finite number of equal-length chunks, $N$ (which you should think of as something like 8, so you can draw a picture and identify one of the chunks, $m_n$). Then we give an approximate expression for the total mass, $M$, which will become exact once we find a Riemann sum, and take the limit as $N \rightarrow \infty$:
$$
M \approx \sum_{n=1}^N m_n
$$
Notice that I have labeled the 8 chunks $m_1, m_2, \ldots, m_8$. These are the $\Delta m$ chunks that confused you. Now express the mass of an individual chunk using the density (at the position of the $n$th chunk, $x_n$) and the length of the $n$th chunk:
$$
M \approx \sum_{n=1}^N \lambda(x_n) L_n
$$
The length of each chunk is $L_n = L / N$, but on our chosen coordinate axis, it is more useful to express it as $L_n = \Delta x_n = x_{n+1} - x_n = \Delta x$. This lets our approximate expression take the form of a Riemann sum, of which we can then take a limit:
\begin{align}
M &\approx \sum_{n=1}^N \lambda(x_n) \Delta x\\
M &= \lim_{N\rightarrow \infty} \sum_{n=1}^N \lambda(x_n) \Delta x\\
&= \int_0^L \lambda(x) dx\\
&= \int_0^L \left( \lambda_0 + a x\right) dx\\
& = \lambda_0 L + \frac{a L^2}{2}
\end{align}
This idea can be generalized to higher dimensions where either
$$
m_n = \sigma(x,y) \Delta A_n = \sigma(x,y) \Delta x \Delta y
$$
or
$$
m_n = \rho(x,y,z) \Delta V_n = \rho(x,y, z) \Delta x \Delta y \Delta z
$$
and to other, e.g., spherical, coordinate systems where you will need to construct $\Delta V$ in terms of those coordinates.
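As a quick sanity check of the bar example above, here is a minimal SymPy sketch (added for illustration; not part of the original answer):

```python
# Verify the bar-mass example: integrate lambda(x) = lambda0 + a*x over 0 <= x <= L.
import sympy as sp

x, L, lam0, a = sp.symbols('x L lambda_0 a', positive=True)
M = sp.integrate(lam0 + a * x, (x, 0, L))
print(sp.simplify(M))   # L*lambda_0 + a*L**2/2, matching the Riemann-sum result
```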
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Connection between parameter space and configuration space I am just wondering: what is the connection between parameter space and configuration space (or phase space)? I know the connection between configuration space and phase space, but it seems like any paper/source I see uses one of "parameter space" or "configuration space", and nowhere is it explicitly stated whether or not the two are the same, or what the relationship between them is if they are different.
| My understanding is that "parameter space" is a generic name that could apply to both configuration and phase space (or really any other kind), while configuration space exclusively refers to the space of the generalised coordinates. It might be the case that authors tend to avoid referring to phase space as "parameter space" while not doing the same for configuration space, perhaps because phase space is more often utilised.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does free molecular flow have a Newtonian force in between 2 chambers? If a vacuum chamber is in free molecular flow at a pressure of, say, $10^{-7}$ Pa, and another container at $10^{-9}$ Pa is connected to it, will they reach pressure equilibrium eventually? At higher pressures, say between $101$ kPa (1 atm) and $50$ kPa, the flow between them is laminar flow. There is then a Newtonian force of (101 kPa - 50 kPa) acting on each square metre of the opening, which gives about $5.1\times10^{4}$ N of "suction" force from one chamber to the next. Does the same happen during free molecular flow?
| Like the continuum flow case, the two connected chambers will eventually reach equilibrium and there will be a net force pushing gas from the high pressure chamber to the low pressure chamber until that occurs.
Let's consider flow from one chamber to another. The momentum carried by an average individual molecule* is
$$p = m u_{\perp} = \sqrt{m k_B T} ,$$
where $m$ is its mass, $u_{\perp}$ is velocity perpendicular to the opening, $k_B$ is Boltzmann's constant, and $T$ is temperature in the source chamber. The number of molecules passing through the opening per unit time is
$$Q = N A u_{\perp} = N A \sqrt{\frac{k_B T}{m}} , $$
where $N$ is molecule number density. Therefore, the force associated with the flow in one direction will be
$$F = p Q = k_B N T A = P A , $$
where $P$ is pressure in the source chamber. This is the expression for the continuum case. However, none of this presupposes collisions between molecules, so it is also valid for free molecular flow. Ultimately, this force will be supplied by the walls of the chambers.
*A more careful derivation would integrate over the Maxwell-Boltzmann distribution.
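For a sense of scale, here is a rough numerical illustration of $F = PA$ at the pressures quoted in the question; the opening area is an assumed value, not something given above:
```python
# Rough illustration of F = P*A at the pressures quoted in the question.
# The opening area below is an assumed value, not given in the question.
P_high = 1e-7   # Pa, higher-pressure chamber
P_low  = 1e-9   # Pa, lower-pressure chamber
A      = 1e-4   # m^2 (an assumed 1 cm^2 opening)

F_net = (P_high - P_low) * A   # net force pushing gas toward the low-pressure chamber
print(f"{F_net:.2e} N")        # ~ 9.9e-12 N -- tiny, but nonzero until equilibrium is reached
```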
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How can we say that the work done by a Carnot engine in a cycle equals the net heat released into it even when it is operated between 2 bodies and not 2 reservoirs? When a Carnot engine is operated between 2 reservoirs, then after each cycle it returns to its initial state, so the change in internal energy is zero and the work done by it equals the net heat released into it. But suppose it is operated between 2 bodies: when the higher-temperature body releases heat into the Carnot engine and the engine releases heat into the lower-temperature body, the temperatures of the bodies will change (unlike the reservoirs). So how can the work done by the Carnot engine still equal the net heat released into it, as given in example 13.6 of the book 'Concepts in Thermal Physics'?
| The process you are describing is not a cycle. So depending on where you start your process, the final internal energy of the working fluid (ideal gas) may not be equal to the starting internal energy of the working fluid, and the working fluid will have done a different amount of work than the amount of heat it received. But, if you start your first cycle on one of the adiabats, and at the same final temperature that the two reservoirs and working fluid attain, the change in internal energy of the working fluid will be zero, and the heat will exactly match the work. In any event, what we usually assume (tacitly) in a case like this is that the change in internal energy of the working fluid is negligible compared to the overall work and heat.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why does a small thermocol ball fall slower than a metal ball of the same volume and surface area (air resistance equal)? Suppose a thermocol ball and a metal ball of same volume and surface area (but different masses, obviously) are dropped from the same height from rest. The acceleration due to gravity is 'g' and the air resistance is also same in both the cases, then why is it that the metal ball reaches the ground first?
| I offer a nearly-zero-mathematics answer:
The acceleration due to gravity is $g$
Here is your misconception. $g$ is the acceleration due to gravity in free fall. When there is air resistance or other forces involved, you don't have perfect free fall, in fact you may have nothing like free fall, for example if the object is buoyant.
However, $g$ is still a really handy number to have lying around. Because we can use it in formulas like
$$F = mg$$
Which gives us the force due to gravity, AKA the weight. And that's pretty convenient when you consider that $g$ has approximately the same value at all practical altitudes anywhere on Earth - depending on how precise you need to be.
The reason that $g$ is the same for all objects is that, if you double the mass of an object, you double the amount of force needed to accelerate it, but you also double the force due to gravity, so these two doublings cancel out and the resultant acceleration is the same. $F=mg$ is simply derived from $F=ma$ where $g$ represents a constant value of $a$ if there are no other forces present.
The ball with the higher mass will have the higher weight, weight is the force which is overcoming the air resistance, so it will accelerate more rapidly. It will also reach a higher terminal velocity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Lorentz contraction using odometers? In principle, would cars moving between a pair of points at different speeds show different odometer readings due to length contraction? When we use odometers to measure the length between two points, what can we say of length contraction?
| You ask what can be said in principle about the measured distance in that situation, and the answer to that is "nothing". The principle of relativity only says that Lorentz-boosting the whole system can't change the physics. Lorentz-boosting a system consisting of a car driving on a road at 100 km/h gets you another system consisting of a car driving on a road at 100 km/h. The principle of relativity says nothing about the relationship between a car driving on a road at 100 km/h and a car driving on a road at 1,000,000,000 km/h. Those situations are not physically equivalent at all.
An odometer that naively counted the number of wheel rotations and multiplied by a constant would probably report different distances at different speeds, but the relationship depends on tricky details of the wheel-road interaction and can't be derived from universal principles. In a world where relativistic car speeds were possible, car odometers probably wouldn't be constructed in that naive way, because it would defeat the purpose of having an odometer. E.g., if you want to use the odometer reading to determine your current location on a road map, you need it to report proper road length. It is possible (not forbidden by the principle of relativity) to construct a device that measures proper road length, so they probably would.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/737774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the correct gravitational potential energy of a single particle in an $N$-body system? I am aware that the total gravitational potential energy of a system of $N$ particles is given by pairwise interactions, i.e., you start with a single particle in the system, and then calculate the work done (negative for an attractive force) to bring in every other additional particle. Like this:
$$U_{total}=-G\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\frac{m_im_j}{r_{ij}}\tag{1}$$
However, does it make sense to talk about the gravitational potential energy of a single particle? Something like this:
$$U_i=-Gm_i\sum_{j=1,j\neq i}^{N}\frac{m_j}{r_{ij}}\tag{2}$$
However, as can be seen from equation 1, summing over these "individual" gravitational potential energies would result in pairwise interactions being counted twice. Thus, would this:
$$U_i=-Gm_i\frac{1}{2}\sum_{j=1,j\neq i}^{N}\frac{m_j}{r_{ij}}\tag{3}$$
... be a correct equation for the gravitational potential energy of the $i^{th}$ particle in an N-body system? At the very least, using equation 3 to calculate the potential energy of each particle would result in the correct total potential energy for the system when summing the individual energies of the particles.
Any insight would be much appreciated.
| Equation (1) is relevant if you're studying the evolution of the whole system. For example, you can use it to construct the conserved energy or the Lagrangian.
Equation (2) is relevant if you're studying the kinematics of the subject particle over a short time period (so you neglect motion of the rest of the system). For example, its gradient (with respect to $\vec r_i$) tells you the instantaneous force on your particle. This equation might be used when numerically integrating the system's evolution.
I can't think of a context where equation (3) is useful, except as a computation step toward evaluating equation (1).
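If it helps, here is a small numerical sketch (arbitrary masses and positions, units with $G=1$, all chosen for illustration) confirming that summing equation (3) over all particles reproduces equation (1):
```python
import numpy as np

# Arbitrary example system (units chosen so that G = 1).
rng = np.random.default_rng(0)
N = 5
m = rng.uniform(1.0, 2.0, N)        # masses
r = rng.uniform(0.0, 1.0, (N, 3))   # positions
G = 1.0

def pair_U(i, j):
    return -G * m[i] * m[j] / np.linalg.norm(r[i] - r[j])

# Equation (1): sum over unordered pairs
U_total = sum(pair_U(i, j) for i in range(N) for j in range(i + 1, N))

# Equation (3): per-particle energies carrying the factor 1/2
U_i = [0.5 * sum(pair_U(i, j) for j in range(N) if j != i) for i in range(N)]

print(np.isclose(U_total, sum(U_i)))   # True: the halved per-particle energies add up to (1)
```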
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why does the electron closest to the nucleus have lower energy (contrary to Heisenberg)? According to shell theory the lowest-energy electrons are on the $s$ level and are closest to the nucleus. This means they are localized in a smaller volume than other, outer-shell electrons, and according to the Heisenberg uncertainty relation they should have higher momentum $p$ and energy $E$. But the opposite is the case? How is this to be explained from a physical point of view (with as little math as possible)? Why is $E_0$ constant at all if the HUP is in force?
|
Why does the electron closest to the nucleus have lower energy (contrary to Heisenberg)?
The first has nothing at all to do with the second. The energy content of the electron is determined by the energy absorbed when a photon is absorbed, or the energy released when a photon is emitted. Empirically it was determined that with increasing photon absorption the electron is found further and further from the nucleus, and vice versa, and that there is a limit to how close it can get to the nucleus, at which no further approach takes place (unless one includes neutron stars in the consideration).
Heisenberg's relation expresses only that, with our much too coarse measuring means, momentum and position cannot be measured at the same time arbitrarily exactly.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Derivation of the braking torque induced by Eddy current in a rotating disc This question was cross-posted here, but I didn't receive an answer. So, I thought that, maybe, there was a missing physics assumption, which makes me post the same question here.
In the article On Eddy Currents in a Rotating Disk, the function U is defined as the stream function of the eddy current flowing through a rotating disc of radius r.
Starting from Eq. (20),
$$
U = \frac{\omega r c b \gamma \Phi \sin{\theta}}{2\pi} \left(1 - \frac{A^2 a^2}{c^2 r^2 + A^4 - 2 r c A^2 \cos{\theta}} \right)
$$
I am a mechanical engineer, so I would like to understand, in the simplest way, how the symbolic definite integration with respect to θ in the following definition of the torque T
$$
T = \frac{\Phi}{\pi a^2} \int_{c-a}^{c+a} \int_{-\theta_1}^{\theta_1} r \frac{\partial U}{\partial r} r dr d\theta
$$
led to
$$
T = \frac{\omega c b \gamma \Phi^2}{\pi^2 a^4} \times \int_{c-a}^{c+a}
\left( r^2 \sin{\theta_1} - \frac{a^2 A^2 r^2 \sin{\theta_1}}{c^2 r^2 + A^4 - 2 A^2 r c \cos{\theta_1}} \right) dr
$$
where $\theta_1$ and $r$ are connected by the relation $r^2 + c^2 — 2 r c \cos{\theta_1} = a^2$.
P.S. I tried to symbolically evaluate the integration with respect to $\theta$ using the Wolfram Engine, and it gives me a numeric value of zero, not the reported answer above.
| First off, to make things more simple, I would integrate by parts in $r$:
$$
T = \frac{\Phi}{\pi a^2}\left(\int_{-\theta_1}^{\theta_1}d\theta [Ur^2]_{c-a}^{c+a}-\int_{c-a}^{c+a}dr 2r \int_{-\theta_1}^{\theta_1}d\theta U \right)
$$
so you just need to perform at fixed $r$ the integral:
$$
\int_{-\theta_1}^{\theta_1}d\theta U
$$
this is done by the change of variables $\theta \to z=\cos \theta$ since schematically:
$$
U = A\sin\theta \left(1-\frac{B}{C-\cos \theta}\right) \\
\int d\theta = \int dz A \left(1-\frac{B}{C-z}\right) \\
=[A(z +B\ln(C-z))]
$$
with $A,B,C$ $r$ dependent constants. After calculating the antiderivative, you just need to evaluate at the boundaries.
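If you want to check the antiderivative symbolically, a short SymPy sketch of the schematic integrand does it (note the extra minus sign that appears because $dz=-\sin\theta\,d\theta$):
```python
import sympy as sp

theta, A, B, C = sp.symbols('theta A B C', positive=True)

# Schematic integrand from above: U ~ A*sin(theta)*(1 - B/(C - cos(theta)))
U = A * sp.sin(theta) * (1 - B / (C - sp.cos(theta)))

antiderivative = sp.integrate(U, theta)
print(sp.simplify(antiderivative))
# SymPy returns an expression equivalent to -A*(cos(theta) + B*log(C - cos(theta))),
# i.e. -A*(z + B*ln(C - z)) with z = cos(theta); the overall minus sign relative to the
# schematic line above comes from dz = -sin(theta) d(theta).
```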
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between mechanics and analog? What is the difference and the relationship between mechanics and analog/analogue? I have noticed that mechanical things are often considered analog.
Note: The difference between digital and analog is clear to me.
| "Mechanical" refers to the physical technology; "analog" refers to the nature of the processed signal.
As @John-Doty says in his answer, analog signals/data have continuous values, digital signals/data have discrete values. Both analog and digital computers can be (in whole or in part) electronic, mechanical, optical,
biological, ….
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/738881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to derive that total kinetic energy is conserved during the collision?
*
*How to derive the below equation from scratch?
*What law support this equation?
*What is the name for it?
*Does it refer to the conservation of kinetic energy?
Like the law of conservation of momentum is derived from Newton's 3rd law.
$\frac{1}{2}m_1u_1^2 + \frac{1}{2}m_2u_2^2 = \frac{1}{2}m_1v_1^2 + \frac{1}{2}m_2v_2^2$
Where $u_1$, $u_2$ are the initial velocities before the collision of masses $m_1$ and $m_2$, and $v_1$, $v_2$ are the final velocities after the collision. This equation relates to classical mechanics, and it is used for finding $v_1$ and $v_2$ for an elastic collision in 1 dimension.
| Suppose that $m_1$ and $m_2$ are interacting with each other via some conservative force with a potential of the form $\phi(|x_2-x_1|)$. Then the Lagrangian is $$\mathcal{L}= \frac{1}{2}m_1 \dot x_1^2 + \frac{1}{2} m_2 \dot x_2^2 -\phi(|x_2-x_1|)$$
Since $\mathcal{L}$ does not depend explicitly on time then per Noether's theorem there is a conserved energy. It is given by $$\mathcal{H}=\sum_i \dot x_i \frac{\partial \mathcal{L}}{\partial \dot x_i}-\mathcal{L} $$$$\mathcal{H}= \frac{1}{2}m_1 \dot x_1^2 + \frac{1}{2} m_2 \dot x_2^2 +\phi(|x_2-x_1|)$$
Now suppose further that for $|x_2-x_1|>R$ we have $\phi(|x_2-x_1|)=0$ meaning that beyond a certain distance $R$ the energy stored in the potential goes to zero. Then if we start and end outside of that distance, $R_{initial}>R$ and $R_{final}>R$, we have $$\frac{1}{2}m_1 u_1^2 + \frac{1}{2} m_2 u_2^2 = \left. \mathcal{H} \right|_{R_{initial}} = \left. \mathcal{H} \right|_{R_{final}}=\frac{1}{2}m_1 v_1^2 + \frac{1}{2} m_2 v_2^2$$ where $u_i = \left. \dot x_i \right|_{R_{initial}}$ and $v_i = \left. \dot x_i \right|_{R_{final}}$
So this result holds whenever there is such a conservative interaction with a short range. Such interactions are called elastic collisions.
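As a numerical illustration of the statement (using the standard textbook 1D elastic-collision formulas for the final velocities, which are assumed here rather than derived above):
```python
# Quick numerical check: kinetic energy and momentum are both conserved.
m1, m2 = 2.0, 5.0        # arbitrary masses
u1, u2 = 3.0, -1.0       # arbitrary initial velocities

# Standard 1D elastic-collision results for the final velocities
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after  = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
p_before  = m1 * u1 + m2 * u2
p_after   = m1 * v1 + m2 * v2

print(abs(ke_before - ke_after) < 1e-12, abs(p_before - p_after) < 1e-12)  # True True
```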
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How is thermal energy split between kinetic energy and potential energy? Internal ("thermal") energy must be some combination of kinetic energy and potential energy, although most discussions of internal energy mention only the kinetic energy. However you also have potential energy -- as particles collide, they reach a minimum separation when they momentarily stop and all the energy is potential -- like when a bouncing ball hits the floor and, for an instant, all the energy is elastic potential energy in the squeezed ball. Then the particles get pushed apart, as that potential energy converts to kinetic. So, is the internal energy evenly split between kinetic and potential?
Or, should we be considering the virial theorem here and say that internal energy consists of twice as much potential energy as kinetic energy?
| In thermal equilibrium, the energy is equally distributed among different types. More precisely, any quadratic term in the energy per particle has average energy of $kT/2$ where $T$ is the temperature of the system and $k$ is the Boltzmann constant. This is the so-called equipartition theorem.
A diatomic molecule that translates, rotates and vibrates has average energy of $7kT/2$, being $3/7$ for translations, $2/7$ for rotations, $1/7$ for vibration and $1/7$ for elastic potential.
The only caveat about this is that depending on the energy scale, some degrees of freedom are "frozen". For example, at sufficiently low temperature a diatomic molecule may not vibrate which means there is no energy in this mode. Any "active" mode has the same energy per molecule.
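For concreteness, here are the corresponding energy scales at room temperature (constants and temperature chosen purely for illustration, and ignoring the frozen-mode caveat just mentioned):
```python
# Numerical illustration of the equipartition energy scales.
k_B = 1.380649e-23   # J/K
T = 300.0            # K

per_quadratic_term = 0.5 * k_B * T   # kT/2 per quadratic term in the energy
diatomic = 7 * per_quadratic_term    # 3 translational + 2 rotational + 2 vibrational (KE + PE)

print(f"kT/2  = {per_quadratic_term:.3e} J")
print(f"7kT/2 = {diatomic:.3e} J")
```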
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/739490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
} |
Why is the internal energy discontinuous in a first-order phase transition? Using the Ehernfest classification where first order phase transitions are those where the 1st derivative of the free energy has a discontinuity, I can follow why the entropy and volume are discontinuous $S=-\frac{\partial G}{\partial T}$ & ${V}=\frac{\partial {G}}{\partial P}$. However it is not to clear to me what relationship results in the discontinuity for the internal energy.
| Once we have a closed thermodynamic system having a discontinuity of the first derivatives of the Gibbs free energy $G$ as a function of temperature and pressure, the discontinuity of the internal energy $U$ as a function of the same variables is a trivial consequence of the relation between $G$ and $U$:
$$
U(T,p)=G(T,p)+TS(T,p)-pV(T,p)
$$
$G(T,p)$ is a continuous function of the variables $T$ and $p$ (it is a consequence of its concavity). Therefore, the discontinuities of $S$ and $V$ imply the discontinuity of $U(T, p)$.
Notice that the functional dependence on $p$ and $T$ is essential. The internal energy, as a function of $S$ and $V$, is continuous. A rarely stressed side remark on Ehrenfest classification is that it is based on the order of derivatives of a thermodynamic potential as a function of the intensive variables only.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Hydrogen wave function for electron orbitals I am a little confused about the quantum mechanics wave function. Hydrogen has a single electron in the first shell n=1, in the first subshell 1s with positive spin. In the attachment the wave function is represented for each unique quantum number n,l,m where n is the respective shell. If the electron can only exist in the first shell n=1, how is it possible to determine a probability density to find the electron in the other shells? Is that because the electron can get 'excited' into the other shells for a very short time?
Sorry if my question seems odd, I have a hard time grasping these new concepts!
| When the one electron is in the $n=1$ shell the atom is said to be in the ground (lowest energy) state.
That one electron can be in one of the other shells and you would then have an excited atom.
So the one electron could occupy any one of the shells and the diagrams show the probability of the electron being at a particular location within the shell it is occupying.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is Hubble's law due to gravity? Hubble's law states that distance is proportional to velocity. A ScienceDirect article states that Classical Hubble expansion is characterized by a proportional increase in the rate of expansion groups based on the distance from the main center of gravity
So is it due to gravity?
| Hubble's law describes the expansion of the universe. Is it due to gravity? Well, gravity is the interaction by which spacetime responds to energy sources (and therefore does anything at all), so yes. But you can't just say that any hypothetical universe containing gravity will therefore automatically obey Hubble's law.
This depends on the constituents of the universe and also which "part of spacetime" you're interested in. There are some details about the present epoch (such as matter dominating over radiation) which have been shown to inevitably follow from more basic properties of the early universe. But there are also plenty where we are far from having a theoretical explanation. One of these is dark energy which has a significant effect on what Hubble's law will look like in the future.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/740896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Why is the derivative of the coordinate of a control volume not zero? When deducing the Navier-Stokes equation, for conservation of momentum, in an Eulerian frame (a control volume),
the derivative of fluid velocity $U_{(t)}$ is calculated
$$\frac{\mathrm{dU} }{\mathrm{d} t}=\frac{\partial U}{\partial t}+\frac{\partial U}{\partial x}\cdot\frac{\partial x}{\partial t}$$
Then $\frac{\partial x}{\partial t}$ is replaced with $\frac{\partial x}{\partial t}=U_x$, where $U_x$ is the $x$ component of $U_{(t)}$
I do not understand how $\frac{\partial x}{\partial t}=U_x$.
$x$ is a coordinate of the control volume, which is static in time. It does not move, so $x$ cannot be a function of time $t$; therefore, it should be $\frac{\partial x}{\partial t}=0$.
How can the variation of the coordinate $x$ of the control volume be equal to the velocity $U_x$ of the fluid?
| Given a (scalar) quantity $\varphi(\mathbf x,\,t)$ that exists in a continuum and has a macroscopic velocity represented by the vector field $\mathbf{u}(\mathbf{x},\,t)$. Then via the chain rule,
$$\frac{\mathrm{d}\varphi}{\mathrm{d}t}=\frac{\partial\varphi}{\partial t}+\dot{\mathbf{x}}\cdot\nabla\varphi.$$
Here, $\dot{\mathbf{x}}\equiv\mathrm{d}\mathbf{x}/\mathrm{d}t$ represents the time derivative of a chosen path in space, $\mathbf{x}(t)$, and, at this point, we have two seemingly obvious choices on the path:
*
*$\dot{\mathbf{x}}=0$.
*$\dot{\mathbf{x}}=\mathbf{u}$ (where $\mathbf{u}$ is the fluid velocity).
In the first choice, the total time derivative is equal to the partial time derivative. While in the second choice, we end up with the material derivative, in which the path of interest follows the fluid velocity field--this latter option, of course, is more useful for analyzing fluid flows and so it is the common choice.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Converting the weight into thermal energy Can we convert the weight into thermal energy? For example we would have a device like scale that if we stand on it, our weight can be converted into the thermal energy. Is this possible?
| There is no way to convert your weight into heat, unless you were made of plutonium and fashioned into a bomb. When you blew up, a fraction of your mass (not weight) would be converted into heat.
Here is the closest you can come to this goal without using plutonium:
You climb a ladder to the top. At the bottom is a bucket of water with a propeller sticking into it, and a clever mechanism attached to a rope. When you step off the ladder and grasp the rope, the mechanism lowers you slowly to the ground while spinning the propeller, which warms up the water slightly.
What you have done here is to store energy as potential energy associated with your height off the ground. You then convert it into kinetic energy of the spinning propeller, and then convert that into heat via friction in the water.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Given a magnetic field how to find its vector potential? Is there an "inverse" curl operator? For a certain (divergenceless) $\vec{B}$ find $\vec{A} $ such that $\vec{B}= \nabla \times \vec{A} $.
Is there a general procedure to "invert" $\vec{B}= \nabla \times \vec{A} $? An inverse curl?
(I was thinking of taking the curl of the previous equation:
$$ \nabla \times \vec{B}= \nabla \times \nabla \times \vec{A}. $$
Then using the triple cross product identity $ \nabla \times \nabla \times \vec{V} = \nabla (\nabla \cdot V) - \nabla^2 V$ but that does not quite simplify things... I was hoping to get some sort of Laplace equation for $\vec{A}$ involving terms of $\vec{B}$.)
| You were very close in taking the curl and looking for a Laplace (actually Poisson) equation for $\mathbf{A}$.
You are allowed to assume that $\boldsymbol{\nabla} \cdot \mathbf{A} = 0$ (Coulomb gauge, as mentioned in doublefelix's answer). Then, using your triple cross product identity, you get $\boldsymbol{\nabla} \times \mathbf{B} = -\nabla^2\mathbf{A}$.
You can solve this for $\mathbf{A}$ using the Green's function $-1/4\pi r$ for the Laplacian, just as in electrostatics. This will give the Helmholtz result for $\mathbf{A}$ as in Nullius's answer (the $\Phi$ part of the Helmholtz decomposition will be zero here because $\boldsymbol{\nabla} \cdot \mathbf{B} = 0$).
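As a quick symbolic sanity check for the simplest possible case -- a uniform field $\mathbf B = B_0\hat{\mathbf z}$, which is my own choice of example rather than part of the answer -- the Coulomb-gauge potential $\mathbf A = \tfrac12\mathbf B\times\mathbf r$ indeed satisfies $\boldsymbol\nabla\times\mathbf A=\mathbf B$ and $\boldsymbol\nabla\cdot\mathbf A=0$:
```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')
B0 = sp.symbols('B0')

# A = (1/2) B x r for a uniform field B = B0 z-hat
A = sp.Rational(1, 2) * B0 * (-R.y * R.i + R.x * R.j)

print(curl(A))         # B0*R.k  -> recovers B
print(divergence(A))   # 0       -> consistent with the Coulomb gauge
```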
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Linear velocity is cross product of angular velocity and position
Why is linear velocity the cross product of angular velocity and position?
| If you take $\bf d\theta$ to be the (infinitesimal) angle swept about the rotation axis (about axial vector $\bf\omega$ in diagram) after a small displacement $d\bf r_j'$ then looking at the diagram, you can form the cross product $$\tag 1d\bf r_j'=d\theta\times r_j'$$
If this displacement occurs in a time interval given by $dt$ and you divide both sides of equation (1) by $dt$ you get $$\bf\frac{d\bf r_j'}{dt}=\frac{d\theta}{dt}\times r_j'$$ or $$\bf v=\omega\times r$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Why doesn't a double slit act like a single slit? Well, a single slit can be considered a continuous array of sources, and thus its spectrum is different than that of a double slit. But why is a double slit so different from a single slit if it is just 2 single slits. Also, if the waves coming from the slits of the double-slit setup act like the wave from a single slit how can they form different patterns?
| The double slit is missing the waves coming from the blockage between the slits. So yes, it's an array of sources, but it's not the same array.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/741982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What would a standing wave of light look like? I want to know what a standing wave of light would like and what properties it might have that are interesting.
| The light in a laser cavity is a standing wave. If we measure the intensity of the light, we will get an image like this:
This picture is from Chang H C, Kioseoglou G, Lee E H, et al. Lasing modes in equilateral-triangular laser cavities[J]. Physical Review A, 2000, 62(1): 013816.
But if we observe the cavity with eyes, in fact we just get the light escaping from the cavity, it looks like:
This picture is from Guidry M A, Song Y, Lafargue C, et al. Three-dimensional micro-billiard lasers: the square pyramid[J]. EPL (Europhysics Letters), 2019, 126(6): 64004.
Here we used a microscope camera to take the photo.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/742157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 5
} |
Why isn't amplitude modulation used more often in magnetic resonance technologies? Optically pumped magnetometers utilize a visible light carrier wave which is amplitude modulated down to a Larmor frequency of ~1000 Hz. This is in contrast to the vast majority of magnetic resonance implementations where the carrier frequency is equal to the Larmor frequency.
Why are there not more technologies that exploit amplitude modulation at the Larmor frequency?
This seems like it would be an easy way to excite spins over a broad frequency range without retuning the resonator.
Amplitude modulation seems to satisfy the Bloch equations and excite spins just fine, so why not more widespread use?
| Because AM needs a strong carrier whose modulation rate is much less than the carrier frequency. In NMR, as mentioned in the answer of @AadhavVenkatesan, the "carrier" is actually a constant bias field. So the closest to AM one can have with that kind of background is a pulsed or on/off keying type of modulation, and this is in fact what is used.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/742654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
When two gas molecules collide, can they send out an IR photon? When a ball bounces on the ground, each bounce is smaller than the previous one because of friction in the system, i.e. the collision between the ball and the ground is not completely elastic. We are taught that the kinetic energy lost in an inelastic collision is turned into heat.
What about when two gas molecules collide in the air? Is their collision always elastic, or does part of their kinetic energy sometimes turn into heat? And if so, does this heat leave the collision in the form of an IR photon?
| Yes, it is common for colliding molecules to emit IR and microwaves.
When molecules collide some of the kinetic energy may go into rotational and vibrational excitations, so that after the collision one or both of the molecules are in an excited state. Then the molecules decay by emitting a photon. Typically the decay of rotational excited states emits microwave photons and the decay of vibrationally excited states emits infra-red photons.
At room temperature the kinetic energy is usually too low to excite vibrational states, so the gas would emit only microwave radiation from the decay of the rotational modes. Typically you need to heat the gas to a few hundred degrees C to see much infra-red emission.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/742784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why can infinite quantities not be shown in an experiment or observed in physics? To modern physicists' knowledge, there are no truly infinite quantities that can be shown with an experiment or observation. Time is not infinite; it had a beginning. Matter and energy are finite (otherwise there would be a giant black hole instead of an Earth). Space could be infinite or finite depending on the geometry of spacetime, but there is no way to measure or travel an infinite distance. The observable universe has a defined limit, the cosmic light horizon.
Some classical mechanics equations give infinities, but with the discovery of quantum mechanics, the infinities were revealed to have been due to probabilistic effects, not true infinite quantities.
Why can we never observe infinite quantities?
|
Why can we never observe infinite quantities?
All physical measuring/observing devices (including our own senses) are constructed from a finite number of parts with a finite number of states and carry out a finite number of processes in a finite amount of time. Therefore it is impossible to design or construct a device that can measure or observe an infinite quantity.
We might use a mathematical model of reality that predicts an infinite quantity in some scenario. This is usually taken as an indication that the model "breaks down" or does not apply to that scenario, because we assume that infinite physical quantities do not exist in reality. But you seem to be asking about observations and measurements rather than existence, which is a different question.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/743095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Why $F = m(v_f - v_0)/2$? Force is directly proportional to mass and velocity and inversely proportional to time so why don't we write $F=1/t+m+v-v_0$ where $m$ is mass, $v$ is final velocity, and $v_0$ is initial velocity?
| By Newton's second law force is defined as mass times acceleration. Acceleration is defined as the time derivative of velocity, and velocity is defined as the time derivative of position. Force, acceleration, velocity, and position are vectors.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/743353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Different radiation intensities from a black body I am a high school student and was wondering about the radiation curve of a black body. Why do the emitted wavelengths from a black body have different intensities? What happens at the atomic level that makes some wavelengths stronger than others resulting in a radiation peak?
| TL;DR: The density of electromagnetic modes increasing with frequency, while the number of photons contained in each mode drops with frequency.
According to the Planck's law, the spectral radiance is given by
$$
B(\nu, T)=\frac{2\nu^2}{c^2}\frac{h\nu}{\exp\left(\frac{h\nu}{k_B T}\right)-1}$$
The factor $\nu^2$ originates from the density of states, i.e., the density of electromagnetic modes per frequency interval, which increases with frequency, $\nu$. On the other hand, the factor $h\nu/\left[\exp\left(\frac{h\nu}{k_B T}\right)-1\right]$ is nearly constant at low frequencies and decays exponentially when $h\nu\gg k_BT$: $h\nu/\left[\exp\left(\frac{h\nu}{k_B T}\right)-1\right]\approx h\nu\exp\left(-\frac{h\nu}{k_B T}\right)$. Thus, at small frequencies the radiance increases, but then drops - hence there is a peak in between. In essence, it is the competition between the density of electromagnetic modes increasing with frequency, while the number of photons contained in each mode drops with frequency.
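A short numerical sketch (SI constants, and an illustrative temperature of 5800 K chosen by me, not part of the question) locates the peak of $B(\nu,T)$ and compares it with Wien's displacement law $\nu_{\rm peak}\approx 2.821\,k_B T/h$:
```python
import numpy as np

# Assumed SI constants and an illustrative temperature.
h, c, k_B = 6.62607015e-34, 2.99792458e8, 1.380649e-23
T = 5800.0   # K (roughly the solar surface, purely for illustration)

nu = np.linspace(1e12, 3e15, 200_000)                      # frequency grid, Hz
B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))  # Planck spectral radiance

nu_peak_numeric = nu[np.argmax(B)]
nu_peak_wien = 2.821 * k_B * T / h                         # Wien displacement law (frequency form)
print(f"{nu_peak_numeric:.3e} Hz  vs  {nu_peak_wien:.3e} Hz")   # both ~3.4e14 Hz
```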
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/743754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to define the inverse of Dirac Gamma Matrices in QFT? The Dirac gamma matrices are a set defined by the 16 following matrices:
$$\Gamma^{(a)}=\{I_{4x4},\gamma^\mu,\sigma^{\mu\nu},\gamma_5\gamma^\mu,\gamma_5\}.\tag{2.122}$$
Now, I wish to determine the inverse set of gamma matrices, $\Gamma_a$.
According to Ashok Das' Lectures on QFT page 58 equation 2.124, the inverse should be defined as:
$$\Gamma_{(a)}=\frac{\Gamma^{(a)}}{Tr(\Gamma^{(a)}\Gamma^{(a)})}\qquad a \text{ not summed}.\tag{2.124}$$
But I don't understand where this comes from, or why it makes sense. If I pick any gamma matrix, say $\gamma_5=\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix}.$
I can calculate
$$\Gamma_a=\frac{\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix}}{Tr(\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix}\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix})}=\frac{\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix}}{Tr(I_{4x4})}=\frac{1}{4}\begin{pmatrix}
0 & I_{2x2} \\
I_{2x2} & 0
\end{pmatrix}$$
But here, clearly $$\Gamma_a\Gamma^a=\frac{I_{4x4}}{4},$$ which is not what I expect.
So how is this properly used? How does one define the inverse Dirac Gamma Matrices?
| Yes, Ashok Das should strictly speaking not call (2.124) the "inverse set of matrices"; they are only proportional$^1$ to the inverse. Rather (2.124) is (2.122) where the upper collective index $(a)$ of the 16 matrices (2.122) has been lowered by a metric $g_{(a)(b)}$. The (inverse) metric is here defined as
$$g^{(a)(b)}~:=~ {\rm Tr}(\Gamma^{(a)}\Gamma^{(b)}), \qquad a,b~\in~\{1,\ldots,16\},\tag{2.123}$$
which is diagonal. The explicit list of $\Gamma_{(a)}$ is given in (2.126).
--
$^1$ It is straightforward to check this explicitly by going through the list.
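Here is one way to do such a check numerically, in the Dirac representation (the explicit matrices are a conventional choice, not taken from the book), for a subset of the 16 matrices: with the normalization (2.124) one finds ${\rm Tr}(\Gamma_{(a)}\Gamma^{(b)})=\delta_a^{~b}$, which is the precise sense in which the lowered set is dual to the original one:
```python
import numpy as np

# Dirac-representation matrices (a conventional choice, not taken from the book).
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
g5 = 1j * g0 @ g1 @ g2 @ g3

Gammas = [np.eye(4, dtype=complex), g0, g1, g2, g3, g5]   # a subset of the 16 in (2.122)
lowered = [G / np.trace(G @ G) for G in Gammas]           # the definition (2.124)

# Tr(Gamma_(a) Gamma^(b)) = delta_a^b : orthonormality under the trace, which is the
# sense in which (2.124) "inverts" (2.122); Gamma_(a) Gamma^(a) itself need not be I.
M = np.array([[np.trace(gl @ gu) for gu in Gammas] for gl in lowered])
print(np.allclose(M, np.eye(len(Gammas))))   # True
```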
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/743883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why are generators of the Lorentz group antisymmetric, while boost matrices are symmetric? We know that a Lorentz boost can be written as
$$
\begin{aligned}
x_0^{\prime} &=\gamma\left(x_0-\beta x\right) \\
x^{\prime} &=\gamma\left(x-\beta x_0\right) \\
y^{\prime} &=y \\
z^{\prime} &= z,
\end{aligned}
$$
symmetric between X and t.
However, infinitesimally, it is included in
$$
\Lambda_{~~~\nu}^\mu=\delta^\mu{ }_\nu+\omega^\mu{ }_\nu,
$$
whose infinitesimal transformations amount to
$$
x^{\prime \mu}=x^\mu+\omega^\mu{ }_\nu x^\nu.
$$
Here
$$
\omega_{\mu\nu}=-\omega_{\nu\mu},
$$
antisymmetric.
Question: how is a symmetric boost transformation quantified by infinitesimal antisymmetric parameters?
| It's in the funny Minkowski metric. In point of fact, as a matrix, for a boost,
$$
\omega^\mu_{~~\nu} = \omega^\nu_{~~\mu},
$$
so it is symmetric, unlike the antisymmetric covariant object,
$$
\eta_{\mu\kappa} \omega^\kappa_{~~\nu} ~~~~~~~~\leadsto \\
\omega_{\mu\nu}= - \omega_{\nu\mu},
$$
as the lowering of the space-like indices picks up a sign w.r.t. the timelike index.
So, leaving the irrelevant y,z inert directions alone, your infinitesimal boost (~to lowest order in β) is but
$$
\begin{pmatrix}x^0 \\ x^1 \end{pmatrix} '= \begin{pmatrix}1&-\beta\\ -\beta & 1 \end{pmatrix}\begin{pmatrix}x^0\\ x^1 \end{pmatrix} =\left (I+ \begin{pmatrix}0& \omega^0_{~~1}\\ \omega^1_{~~0} & 0 \end{pmatrix}\right )\begin{pmatrix}x^0\\x^1 \end{pmatrix} ,
$$
since $\omega^0_{~~0}=0=\omega^1_{~~1}$.
To be sure, this mismatch miracle does not occur for rotations, which entail only spacelike indices, so the mixed tensor has the same antisymmetry as the covariant one.
*
*In conclusion, the antisymmetry of the covariant tensor $\omega_{\mu\nu}$ elegantly unifies rotations with boosts (hyperbolic rotations) by dint of the Minkowski metric. Neat, huh?
Clarification to comment question
Indeed, you don't understand the notation: The mixed tensor (one covariant and one contravariant index) is not always symmetric: only for the boost, but not for rotations. So,
for the boost,
$$
\omega_{0~1}=\eta_{0\kappa} \omega^ \kappa_{~~1}=\omega^ 0_{~~1}= \omega^ 1_{~~0}= -\omega_{1~0}\equiv b,
$$
but for a rotation,
$$
\omega_{2~1}=\eta_{2\kappa} \omega^ \kappa_{~~1}=-\omega^ 2_{~~1}= \omega^ 1_{~~2}= -\omega_{1~2}\equiv a.
$$
If we take $\omega_{0~2}=0$, and ignore the z direction, we have the mixed-symmetry mixed-tensor matrix ,
$$
\omega^ \mu _{~~\nu} = \begin{pmatrix}0 & b&0 \\ b&0 & a \\ 0& - a&0 \end{pmatrix},
$$
with the standard structure of the boost and rotation generators.
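A tiny numerical check of the index gymnastics (the metric signature and the small boost parameter below are chosen for illustration):
```python
import numpy as np

# Assumed signature (+,-,-,-) and a small illustrative boost parameter.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
b = 0.01

omega_mixed = np.zeros((4, 4))
omega_mixed[0, 1] = omega_mixed[1, 0] = b   # omega^0_1 = omega^1_0 = b (boost along x)

omega_lower = eta @ omega_mixed             # omega_{mu nu} = eta_{mu kappa} omega^kappa_nu

print(np.allclose(omega_mixed, omega_mixed.T))    # True: the mixed tensor is symmetric
print(np.allclose(omega_lower, -omega_lower.T))   # True: the covariant tensor is antisymmetric
```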
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does the formula for time period of a simple pendulum hold up for larger angles? When calculating the time period of a simple pendulum as an experiment in junior classes we used the formula
$$T = 2\pi\sqrt{\frac{l}{g}}$$
But recently seeing the derivation of the formula using Simple Harmonic Motion, I can't understand how it holds for such large angles since $\sin{\theta}$ gets approximated to $\theta$ which only holds true for very small values of $\theta$ (or as it would be written mathematically for $\theta \to 0$).
So, how, if at all, does this formula hold for the larger angles and why is it used for calculating time period of a pendulum during experiments?
| If you want to have an idea how far the result deviates from its small-angle approximation…
The pendulum's total energy is:
$$E=K+V=\frac{1}{2}m(l\dot{\theta})^2-mgl\cos(\theta)$$
Let's assume that the pendulum starts from an angle $\theta_0$ with no velocity. Conservation of energy yields:
$$\dot{\theta}^2=2\frac{g}{l}\bigl(\cos(\theta)-\cos(\theta_0)\bigr)$$
During the half-period where the pendulum swings from $-\theta_0$ to $\theta_0$, its angular velocity $\dot{\theta}$ is positive, so the half-period is:
$$\frac{T}{2}
=\int_0^{\frac{T}{2}}dt
=\int_{-\theta_0}^{\theta_0}\frac{d\theta}{\dot{\theta}}$$
which yields:
$$T=2\sqrt{\frac{2l}{g}}\int_0^{\theta_0}\frac{d\theta}{\sqrt{\cos(\theta)-\cos(\theta_0)}}$$
This integral can be computed numerically. Here's a graph of the period versus $\theta_0$:
As you can see, the approximation $T\simeq 2\pi\sqrt{\frac{l}{g}}$ keeps some validity until 1 to 2 rad.
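If you want to reproduce the numbers behind that statement, here is a minimal numerical sketch of the same integral using scipy.integrate.quad (the values of $l$ and $g$ are assumed and cancel in the ratio $T/T_0$ anyway):
```python
import numpy as np
from scipy.integrate import quad

# Assumed values of l and g; they cancel in the ratio T/T0.
g, l = 9.81, 1.0
T0 = 2 * np.pi * np.sqrt(l / g)

def period(theta0):
    integrand = lambda th: 1.0 / np.sqrt(np.cos(th) - np.cos(theta0))
    val, _ = quad(integrand, 0.0, theta0, limit=200)
    return 2 * np.sqrt(2 * l / g) * val

for theta0 in (0.1, 0.5, 1.0, 2.0, 3.0):
    print(f"theta0 = {theta0:3.1f} rad :  T/T0 = {period(theta0) / T0:.3f}")
# roughly 1.00, 1.02, 1.07, 1.33, 2.6 -- the small-angle formula stays good to a few
# percent out to about 1 rad, as stated above.
```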
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Energy of compound harmonic oscillator Consider two harmonic oscillators of masses $m_1,m_2$ and spring constants $k_1,k_2$ respectively. Their motions are described by equations $$u_1=A_1\sin(\omega t+\varphi_1),\qquad u_2=A_2\sin(\omega t+\varphi_2).$$
Total mechanical energies of these two are given by $$E_1=\frac12k_1A_1^2,\qquad E_2=\frac12k_2A_2^2.$$ What can be said about energy $E$ of the compound harmonic oscillator $u=u_1+u_2$? In particular, is it true that $$E=E_1+E_2?$$ What if $\omega_1\neq\omega_2$ (in which case $u$ isn't harmonic or even periodic)?
| If you mean just the mathematical addition (taking equal amplitudes $A_1=A_2=A$ and equal frequencies), then
$u_1+u_2=A\bigl(\sin(\omega t+\varphi_1)+\sin(\omega t+\varphi_2)\bigr)=B\sin(\omega t+\varphi_3)$, where $B=A\sqrt{(\sin\varphi_1+\sin\varphi_2)^2+(\cos\varphi_1+\cos\varphi_2)^2}$
and $\varphi_3=\arctan\!\left(\frac{\sin\varphi_1+\sin\varphi_2}{\cos\varphi_1+\cos\varphi_2}\right)$, so the energy is not simply added: the sum is just another oscillator, not a physical composition of motions.
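A quick numerical spot-check of that equal-amplitude addition (parameter values chosen arbitrarily; arctan2 is used so the quadrant of $\varphi_3$ comes out right):
```python
import numpy as np

# Arbitrary illustrative values
A, w = 1.7, 2.0
phi1, phi2 = 0.3, 1.1

B = A * np.sqrt((np.sin(phi1) + np.sin(phi2))**2 + (np.cos(phi1) + np.cos(phi2))**2)
phi3 = np.arctan2(np.sin(phi1) + np.sin(phi2), np.cos(phi1) + np.cos(phi2))

t = np.linspace(0, 10, 1000)
lhs = A * np.sin(w * t + phi1) + A * np.sin(w * t + phi2)
rhs = B * np.sin(w * t + phi3)
print(np.allclose(lhs, rhs))   # True
```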
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Second Law of Thermodynamics Restatement with usable energy instead of entropy Is it technically accurate to state the Second Law of Thermodynamics as:
"The total amount of usable energy only decreases in a closed system"
I ask because it doesn't evoke the term "entropy", which usually only confuses the average person.
|
Is it technically accurate to state the Second Law of Thermodynamics as: "The total amount of usable energy only decreases in a closed (actually, isolated) system"
Yes, we can give the following qualitative interpretation of the above statement without explicitly mentioning entropy or exergy.
A non-equilibrium system is generally characterized by gradients of temperature, pressure and chemical potential. The classical example is a rigid insulated box divided into two parts, each at its own temperature and pressure, and with its own composition. If we remove the partition in the presence of gradients there will be transfer of mass and energy between the two parts until $T$, $P$ and $\mu$ are uniform. This is a statement of the second law: the tendency of an isolated system is to reach a state in which no gradients of $T$, $P$ or $\mu$ exist.
At the same time, useful work can be extracted from a thermodynamic system only in the presence of gradients. A Carnot cycle needs two different temperatures, expansion work requires a $\Delta P$, osmotic work requires a $\Delta \mu$. We can produce work from a non-equilibrium partitioning of the system, but not from a system that has reached equilibrium; this would have to be placed into contact with some other system at different $T$, $P$ and $\mu$. The dissipation of gradients is what Kelvin termed the "heat death" of the universe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Using displacement field vs. electric field to calculate curl of magnetic field So let's say we have a medium with polarization $\vec{P} = \gamma \nabla \times E$, with no free currents or charges.
So we know that $H = \frac{1}{\mu_0} B - M$ , $D = \epsilon_0 E+P$ and $\nabla \ \times H =\mu_0 J_{free}+\frac{\partial D}{\partial t} $ here reduces to $\nabla \ \times H =\frac{\partial D}{\partial t} $
So since there is no free current I mistakenly thought that $\nabla \times B = \mu_0 J + \mu_0\epsilon_0\frac{\partial E}{\partial t}$ reduces here to $\nabla \times B = \mu_0\epsilon_0\frac{\partial E}{\partial t}$, but if I say that
$$\nabla \times B = \mu_0 \nabla \times H = \mu_0 \frac{\partial D}{\partial t} = \mu_0 \epsilon_0 \frac{\partial E}{\partial t} + \mu_0 \frac{\partial P}{\partial t} \not= \mu_0\epsilon_0\frac{\partial E}{\partial t}$$
so why do I need to use the displacement field here? this is like saying there is a current due to the polarization but what is this current? what am I missing here?
| Of course there is current due to changes in polarization. For polarization to change, charged particles have to change positions, and this motion means there is electric current.
In a dielectric with no magnetization, total current can be expressed as
$$
\mathbf J = \frac{\partial \mathbf P}{\partial t}.
$$
In a magnetic medium with no electric polarization, total current can be expressed as
$$
\mathbf J = \nabla \times \mathbf M.
$$
There is no universal formula for total current, it depends on the medium. In magnetic conductor in ohmic regime, total current is
$$
\mathbf J = \nabla \times \mathbf M + \sigma \mathbf E.
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Can I still use the table of Clebsch-Gordan coefficients if isospin isn't conserved, to calculate the branching ratio? The title is basically everything. For example, the interaction $\Lambda^0 \rightarrow \Sigma^+ + \pi^-$ or $\Lambda^0 \rightarrow \Sigma^0 + \pi^0$. Isospin isn't conserved but the interaction is still possible, right?
| The Λ has mass ~ 1.116 GeV, so it is below the threshold to decay into the Σ(1.189) + π(0.140) system.
By contrast, the isosinglet Λ(1.405) is above that threshold and can and does decay to Σπ, strongly, 100% of the time. Isospin 1⊗1 of the products can, indeed, combine to an isosinglet, so isospin 0 is conserved. You can, and must use the C-G coefficients to compute the BRs. What do you find?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can water at 0 degrees Celcius and 1 atm have a range of different cooling abilities? Imagine we have two 1L containers of water, both at 1 atm, both at 0 degrees C. However, container 1 is at point (b) in the heating/cooling curve below, while container 2 is at point (c) in the heating curve below.
Then when mixing these two containers of water with two 1L containers of water at say 50 C, the resulting mixes would have different temperatures, right?
What I'm getting at is that water at 0 C can have a range of different values of specific heat, and those different values would result in the water being able to cool other things differently, right?
Perhaps to achieve liquid water with different amounts of specific heat one container would be just thawed and the other would be just melted?
| Point B corresponds to a 1 kg chunk of ice at 0C, and point C corresponds to 1 kg of liquid water at 0 C. It takes 333 kJ of heat to melt the ice and move from point B to point C. What temperature do you think mixing 1 kg of liquid water at 0 C with 1 kg of liquid water at 50 C would end up at. Do you think that mixing 1 kg of ice at 0 C with 1 kg of liquid water at 50 C would end up at the same temperature? Do you think that all the ice would even melt?
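To make the contrast concrete, here is a rough worked estimate (taking the specific heat of liquid water as roughly $4.2\ \mathrm{kJ/(kg\cdot K)}$, a value not quoted above): cooling 1 kg of liquid water from 50 C to 0 C releases about $1\ \mathrm{kg}\times 4.2\ \mathrm{kJ/(kg\cdot K)}\times 50\ \mathrm{K}\approx 210\ \mathrm{kJ}$, which is less than the 333 kJ needed to melt 1 kg of ice. So the ice plus 50 C water mixture ends up at 0 C with some ice left over, whereas the two liquid samples mix to roughly 25 C.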
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/744967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does water feel hotter at larger volume? Why does a 104°F pool/tub feel boiling hot, whereas a pot/cup of water at the same temperature does not feel hot at all.
(Normally a pot/cup of water won't be hot enough to cause one to immediately remove one's finger from the water till it's around 165-175°F).
Probably same conductivity, specific heat, BTU's (since the amount of BTU's needed to raise the temperature 1°F is proportional to the volume of water. Meaning, 1 BTU is needed per pound, so no matter what the volume is it will contain the same BTU's per pound).
Perhaps there's much more "heat" (BTU's/Joules) available in a tub/pool to "refill" the spot which transferred into ones body (perhaps through conductivity) not allowing the area of water touched by ones body in the tub/pool to cool off fast enough.
Another possible factor might be that perhaps there's an increase in convection; not sure if the higher the volume of water, the higher the convection.
| The difference in temperature perception between a 104°F pool/tub and a pot/cup of water at the same temperature is likely due to physical phenomena like thermal mass, convection, heat transfer rate and sensing area (finger vs whole body).
The larger volume of water in a pool or tub means a higher thermal mass that helps to maintain the temperature of the water. Also, the surface area of the water in contact with your skin is much larger in a pool or tub than it would be in a cup or pot, allowing for more heat transfer to occur.
Convection is the transfer of heat by the movement of a fluid or gas. In the case of a pool or tub, convection occurs when the water cooled at your skin sinks and is replaced by warmer water. This creates a circulating pattern, where warm water is continually brought to your skin and cooled water sinks away, promoting the transfer of heat throughout the water. This process increases the heat transfer rate to the skin, hence making you feel hot.
In a pot or cup, convection is less likely to occur because the water is confined in a small space, so the heat transfer rate is slower.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
The definition of "total curvature" for a scalar field In Modern Electrodynamics, Zangwill remarks that the total curvature vanishes at every point where $\nabla^2 \varphi = 0$.
Now my question(s): how is "total curvature" defined for a scalar field (is it, perhaps, the "function" of $\hat{\bf{n}}$ for every unit vector $\hat{\bf{n}}$ which gives $(\hat{\bf{n}} \cdot \nabla)(\hat{\bf{n}} \cdot \nabla) \varphi$)? And how does the vanishing of the Laplacian at that point imply whatever the vanishing of whatever the definition of total curvature is?
| I think Zangwill is just being heuristic with the terminology here, and is using “curvature” to mean “second (partial) derivatives of the function”. He is interpreting each of the quantities $\frac{\partial^2\phi}{\partial x^2}, \frac{\partial^2\phi}{\partial y^2} , \frac{\partial^2\phi}{\partial z^2} $ as representing “curvatures in the $x,y,z$ directions”, and their sum as the “total curvature”. So, it seems to me he is making an essentially tautological point that the sum of the second partials vanishes at every point where the Laplacian (which by definition is the sum of the second-order $x,y,z$ partials) vanishes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the Laplacian in the Pauli Equation? The Pauli Equation is given by
$$\left[\frac{1}{2m}\left[({\bf\hat{p}}-q {\bf A})^2-q\hbar{\boldsymbol\sigma}\cdot {\bf B}\right]+q\phi\right]|\psi\rangle=i\hbar\frac{\partial}{\partial t}|\psi\rangle.$$
This contains a component ${\bf\hat{p}}^2 |\psi\rangle$.
However, according to Wikipedia:
The state of the system, |ψ⟩ (written in Dirac notation), can be considered as a two-component spinor wavefunction
So $|\psi\rangle$ is a vector with two elements, as a function of time. Therefore, I think that $\bf{\hat{p}}$ can be viewed as a $2\times2$ matrix. What are the matrix elements?
If it is just $-i\hbar$ times the identity matrix, why is the equation not simplified to
$$\left[\frac{1}{2m}\left[(-i\hbar{\bf I}-q{\bf A})^2-q\hbar{\boldsymbol\sigma}\cdot {\bf B}\right]+q\phi\right]|\psi\rangle=i\hbar\frac{\partial}{\partial t}|\psi\rangle,$$
with $\bf{I}$ the identity?
If it is $-i\hbar\nabla$ times the identity matrix, then how can we take the laplacian of an element of the two-spinor? After all, each element is just a complex number.
| As mentioned in the comments, it is understood that $p^2$ stands for $p^2 \otimes \mathbb{I}$. Since $p^2 = - \hbar^2 \nabla^2$, this means you just apply $p^2$ to each component of the spinor separately. In other words, the Laplacian is just applied to each entry separately.
Furthermore, don't forget that each component of the spinor is a function. The state of the system is a two-component spinor wavefunction. This means it is composed of two wavefunctions stacked on a spinor.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find the corresponding energy given the wave function? So I was struggling doing the following question:
Given the wave function $\psi(x) = A\, \mathrm{e}^{-ax^2} $ with potential $V = \frac12 kx^2$, find the corresponding total energy in terms of $k$ and $m$.
I did the calculation for $\left<x^2\right>$ and $\left<p^2\right>$ but it turns out to be expressions including $a$. How can I express $a$ in terms of $k$ and $m$?
| If you're told that $\psi(x)$ is an energy eigenstate, then it must be the case that
$$
- \frac{\hbar^2}{2m} \frac{d^2 \psi}{d x^2} + \frac{1}{2} k x^2 \psi = E \psi
$$
for some value of $E$. But if you actually take the derivatives on the left-hand side, you'll find that there is one particular value of $a$ for which this is actually true; it will be the value of $a$ that ensures that the right-hand side is just a multiple of $\psi$ itself. This requirement determines $a$ in terms of $k$ and $m$ (and other constants.)
However, if you don't know that $\psi(x)$ is an energy eigenstate, then it's not possible to determine $a$. Any wavefunction $\psi(x)$ can be written as a superposition of the energy eigenstates of the system, including $e^{-ax^2}$ for an arbitrary value of $a$. You can still calculate the expectation value for the Hamiltonian in this state, $\langle \psi | H | \psi \rangle$; but the result will be a function of $a$.
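Here is a short SymPy sketch of that eigenstate route (purely symbolic; nothing beyond the Schrödinger equation written above is assumed):
```python
import sympy as sp

x, a, k, m, hbar = sp.symbols('x a k m hbar', positive=True)
psi = sp.exp(-a * x**2)   # the normalization A drops out of the eigenvalue equation

# Apply H = -hbar^2/(2m) d^2/dx^2 + (1/2) k x^2 and divide by psi:
H_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + sp.Rational(1, 2) * k * x**2 * psi
ratio = sp.expand(sp.simplify(H_psi / psi))
# ratio = hbar**2*a/m + x**2*(k/2 - 2*hbar**2*a**2/m)

# Eigenstate condition: the x**2 coefficient must vanish, which fixes a in terms of k, m
a_val = sp.solve(sp.Eq(ratio.coeff(x, 2), 0), a)[0]
E = sp.simplify(ratio.subs(a, a_val))
print(a_val)   # sqrt(k*m)/(2*hbar)
print(E)       # hbar*sqrt(k/m)/2, i.e. (1/2) hbar omega with omega = sqrt(k/m)
```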
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Atmospheric pressure in non-inertial frame? Any object kept in an accelerating container of water feels a different pressure than when unaccelerated, because if we go into the frame of the water the effective $g$ changes. Since air is also a fluid, a container of liquid accelerating upwards should experience more atmospheric pressure than it feels at rest, but intuitively it does not feel so. Am I correct in assuming that the liquid feels more atmospheric pressure? (Quantitatively, $P\times(g+a)/g$)
| This question may be based on an expression for pressure in a column in hydrostatic equilibrium, like $P = \rho g h$ (or its integral generalisation for variable density $P = \int_{z_0}^{\infty} \rho g dz$). It's important to bear in mind the assumptions underlying these expressions. Your equation for pressure would hold subject to the following:
* Your liquid is subject to acceleration $a$ upward.
* A column of atmosphere above that liquid accelerates upward at the same rate.
* There is a barrier which prevents pressure inside the column from equilibrating with the atmosphere which is at rest outside the column.
This seems like a rather contrived case, which may be why the kind of dependence of $P$ on $a$ you're talking about seems counterintuitive.
To look at the more realistic case where the atmosphere is not in uniform motion we need to solve the Euler equations. The one for pressure can be stated:
$$ \frac{D \mathbf{u}}{D t} - \mathbf{g} = -\frac{\nabla p}{\rho}$$
This shows how pressure gradient $\nabla p$ is related to gravitational field g and (Lagrangian) acceleration $\frac{D \textbf{u}}{D t}$ of a fluid parcel.
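For the simplest special case the question asks about, a closed container of liquid in rigid-body vertical acceleration $a$, the Euler equation above reduces to $\partial p/\partial z = -\rho(g+a)$, which is exactly the $(g+a)/g$ scaling proposed in the question. A small numerical sketch of my own (the numbers are purely illustrative):

```python
# Gauge pressure at depth h in a liquid in rigid-body vertical acceleration a:
# with Du/Dt = a (upward) the Euler equation gives dp/dz = -rho*(g + a), so p = rho*(g + a)*h.
rho = 1000.0     # kg/m^3, water
g = 9.81         # m/s^2
h = 0.5          # m, depth below the free surface

for a in (0.0, 2.0, 9.81):                 # upward acceleration of the container
    p_gauge = rho * (g + a) * h
    print(f"a = {a:5.2f} m/s^2   p(gauge) = {p_gauge:8.1f} Pa")
```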
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is Astatine-210 (At-210) the longest-lived isotope of astatine despite possessing an odd number of neutrons? I am guessing that isotopes with an even number of neutrons more readily release an alpha particle... When and if At-210 does that, it still has the problem of being 'odd/odd'...
But this begs the question... Why can't highly unstable isotopes like this just emit a neutron? Rather than 'waiting' for beta decay (or electron capture) to occur?
Why isn't the radioactive emission of a single neutron or proton, via quantum tunneling perhaps, as common as alpha decay (via quantum tunneling)?
| Leaving aside the questions of why only certain decay options are observed, lets just compare the known decays of At-210 and your proposed neutron emission.
Going to the latest Atomic Mass Evaluation (2020 version, part II with the tables of masses) one can look up the masses of various nuclei as well as their decay products. One finds:
| Nuclei | Mass (amu) | Delta (amu) |
| --- | --- | --- |
| At-210 | 209.987148 | 0 |
| At-209 + n | 209.9948339 | +0.007686 |
| Po-210 (ec) | 209.9828737 | -0.004273 |
| Bi-206 + $\alpha$ | 209.9811023 | -0.006044 |
The deltas show that At-210 cannot decay to At-209 plus a neutron - the total mass of the products increases, meaning it is not energetically possible. The other decay paths are exothermic. Further, the alpha decay is more energetically favorable, that is the total mass after alpha decay is less than for beta decay.
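For anyone who wants the numbers in energy units, here is a quick arithmetic sketch of my own: it converts the mass differences above to Q-values using the standard factor 1 u ≈ 931.494 MeV/c², with the product masses quoted in the table.

```python
U_TO_MEV = 931.494            # MeV per atomic mass unit (c = 1)

m_at210 = 209.987148          # parent atomic mass, u
products = {
    "At-209 + n":     209.9948339,
    "Po-210 (ec)":    209.9828737,
    "Bi-206 + alpha": 209.9811023,
}

for name, m_final in products.items():
    q_mev = (m_at210 - m_final) * U_TO_MEV
    verdict = "allowed" if q_mev > 0 else "forbidden"
    print(f"{name:>16}: Q = {q_mev:+7.2f} MeV  ({verdict})")
```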
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does Bragg's law consider only specular scattering for constructive interference? My textbook only considers two-dimensional scattering, so I will stick to that. When explaining Bragg's law, it states that the incidence angle and the scattering angle must be equal so that all trajectories from different atoms (from the same row) to a point $P$ in a screen are the same. Then, it says that between different rows separated by a distance $d$, the equation $2d\sin\theta=m\lambda$ must also be satisficed for constructive interference to occur.
What I do not understand is why the incidence and scattering angles must be equal. Isn't it possible for $\theta_i$ and $\theta_s$ to be different, and to account for the difference in the trajectories so that it differs by an integer number of wavelengths? It will end up as a system of equations somewhat more complicated than the expression above, but I find it completely possible.
| If you have a uniformly spaced linear array of $M$ identical omni-directional emitters such that they have equal amplitudes, say $1$, but the phases are in an arithmetic series, that is the $m^{th}$ has phase $m\alpha$, then the sum of all at a large distance such that $R>>Md$ and in direction $\theta$ will be ($\kappa = 2\pi/\lambda$ is the wavenumber and $d$ is the spacing):
$$A(\theta) \propto e^{-\mathfrak j \kappa R} \sum_{m=0}^{M-1} e^{\mathfrak j m(\kappa d\rm{ cos}(\theta) - \alpha)}.$$
This is a geometric series that sums to
$$A(\theta) \propto e^{-\mathfrak j \kappa R} \frac {e^{\mathfrak j M(\kappa d\rm{ cos}(\theta) - \alpha)}-1}{e^{\mathfrak j (\kappa d\rm{ cos}(\theta) - \alpha)}-1}.$$
This obviously has the peak in the direction $\hat \theta $ where
$\kappa d\rm{cos}(\hat \theta) = \alpha$ in which case they all add up coherently resulting in $|A(\hat \theta )| \propto M$ Now from symmetry consideration this would be the same angle from the other side exciting the array with a plane wave resulting the same phase steps exciting the individual emitters in an arithmetic series.
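A short numerical sanity check of the array-factor argument (a sketch of my own; the spacing, phase step and $M$ are arbitrary illustrative values): the magnitude of the phasor sum peaks where $\kappa d\cos\hat\theta = \alpha$ and reaches $M$ there.

```python
import numpy as np

lam = 1.0                      # wavelength
d = 0.4 * lam                  # emitter spacing
kappa = 2 * np.pi / lam        # wavenumber
M = 20
alpha = 0.6 * kappa * d        # phase step; peak expected at cos(theta) = alpha/(kappa*d)

theta = np.linspace(0, np.pi, 20001)
m = np.arange(M)[:, None]
A = np.abs(np.exp(1j * m * (kappa * d * np.cos(theta) - alpha)).sum(axis=0))

theta_peak = theta[np.argmax(A)]
print(np.cos(theta_peak), alpha / (kappa * d))   # both ~0.6
print(A.max(), M)                                # coherent sum: |A| ~ M at the peak
```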
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/745974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why is $( \alpha_i r_i) (\alpha_j r_j ) = \frac{1}{2} \{ \alpha_i , \alpha_j\}r_i r_j$? Where $\alpha_i= \begin{pmatrix} 0 & \sigma_i \\ \sigma_i & 0 \end{pmatrix}$.
To me it should just be $( \alpha_i r_i) (\alpha_j r_j ) = \alpha_i \alpha_j r_i r_j$, but it is not. Why the difference?
| Without loss of generality,
$$
\alpha_i \alpha_j = \frac{1}{2} \left( [\alpha_i, \alpha_j] + \{\alpha_i, \alpha_j\}\right)
$$
Note the first term is antisymmetric under interchange of $i$ and $j$, and the second term is symmetric.
Second, note that $r_i r_j$ is symmetric under interchange of $i$ and $j$. Therefore, $[\alpha_i, \alpha_j] r_i r_j = 0$.
This follows since the trace of a product of an antisymmetric matrix and a symmetric matrix is zero. If $A=-A^T$, and $S=S^T$, then ${\rm tr}(AS) = {\rm tr} ((AS)^T) = {\rm tr}(S^T A^T) = {\rm tr}(A^T S^T) = -{\rm tr}(A S)$, and therefore ${\rm tr}(AS)=0$. In this chain of equations I've used ${\rm tr}(AB)={\rm tr}(BA)$, ${\rm tr}(A)={\rm tr}(A^T)$, $(AB)^T=B^TA^T$, and ${\rm tr}(-A) = - {\rm tr}(A)$.
Combining the above facts, we conclude that
$$
\alpha_i \alpha_j r^i r^j = \frac{1}{2} [\alpha_i, \alpha_j] r^i r^j + \frac{1}{2}\left\{\alpha_i, \alpha_j\right\} r^i r^j = \frac{1}{2}\left\{\alpha_i, \alpha_j\right\} r^i r^j
$$
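A brute-force numerical check of the identity (a sketch of my own), using the explicit $4\times 4$ alpha matrices from the question and treating the $r_i$ as ordinary commuting numbers, which is exactly the assumption the symmetry argument relies on:

```python
import numpy as np

# Dirac alpha matrices alpha_i = [[0, sigma_i], [sigma_i, 0]].
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
alpha = [np.block([[np.zeros((2, 2)), s], [s, np.zeros((2, 2))]]) for s in sigma]

r = np.array([0.3, -1.2, 2.5])          # arbitrary commuting components r_i

lhs = sum(alpha[i] @ alpha[j] * r[i] * r[j] for i in range(3) for j in range(3))
rhs = sum(0.5 * (alpha[i] @ alpha[j] + alpha[j] @ alpha[i]) * r[i] * r[j]
          for i in range(3) for j in range(3))

print(np.allclose(lhs, rhs))                        # True
print(np.allclose(lhs, np.dot(r, r) * np.eye(4)))   # True, since {alpha_i, alpha_j} = 2*delta_ij
```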
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Distinction between "types of heat" in thermal efficiency The definition of thermal efficiency I see in several sources is "total work" divided by "heat input".
Wikipedia, for example, says: "For a heat engine, thermal efficiency is the ratio of the net work output to the heat input".
I don't understand this definition. Net work is a perfectly valid concept, always given by $\oint pdV$. For a Carnot cycle, heat presents no problem, because the adiabatic processes involve no heat, while the isothermal processes are easily identified as consuming heat or producing heat.
However, for a general cycle this distinction between "heat input" and "heat output" is not clear. Just imagine a generic cycle in a $p-V$ diagram. How am I supposed to know which bits of the cycle are "heat input" and which are "heat output"?
| First of all, $\oint pdV$ is the net work only if $p$ equals the external pressure. If the process is not reversible then the internal pressure is not equal the external pressure. So assume that the process is reversible.
In that case, consider an arbitrary reversible cycle and denote by $dS$ the entropy transported between the system (engine) and its surroundings at temperature $T$; then $\oint pdV = \oint TdS$. If $dS>0$ we say the entropy is absorbed by the engine, and if $dS<0$ it is expelled (rejected). You may call $Q_{abs}=\oint_{dS>0} TdS$ and $Q_{rej}=-\oint_{dS<0} TdS$, and define the cycle efficiency as the ratio
$$\eta = \frac{\oint pdV}{\oint_{dS>0} TdS}=1+\frac{\oint_{dS<0} TdS}{\oint_{dS>0} TdS}$$
For a Carnot cycle, there are two isothermal stages $\int TdS = T_h (S_2-S_1)$ and $\int TdS = T_\ell (S_4-S_3)$ where $1,2,3,4$ refer to the stages at which the cycle changes from isothermal (1-2) to adiabatic (2-3) to isothermal (3-4) - adiabatic (4-1). Therefore $S_2=S_3$ and $S_1=S_4$, and $\Delta S= S_2-S_1=S_3-S_4$ will be the absorbed and rejected entropy resulting $\oint pdV = \oint TdS = (T_h-T_{\ell})\Delta S$, and thus $\eta = 1-\frac{T_{\ell}}{T_h}$
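To make the $dS>0$ / $dS<0$ split concrete for a non-Carnot cycle, here is a small numerical sketch of my own: an arbitrary elliptical cycle traced directly in the $T$-$S$ plane, with $\oint T\,dS$ integrated piecewise by the sign of $dS$, recovering $W = Q_{abs} - Q_{rej}$.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 20001)
S = 1.0 + 0.3 * np.sin(t)        # entropy of the working substance (arbitrary units)
T = 400.0 + 100.0 * np.cos(t)    # temperature at which entropy is exchanged (K)

dS = np.gradient(S, t)
TdS = T * dS

Q_abs = np.trapz(np.where(dS > 0, TdS, 0.0), t)    # heat input  (dS > 0)
Q_rej = -np.trapz(np.where(dS < 0, TdS, 0.0), t)   # heat output (dS < 0)
W = np.trapz(TdS, t)                               # net work = loop integral of T dS

print(W, Q_abs - Q_rej)          # equal: first law over a cycle
print(W / Q_abs)                 # efficiency of this particular cycle (~0.33)
print(1 - T.min() / T.max())     # Carnot bound between the extreme temperatures (~0.40)
```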
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Why is the common envelope ejected in some accretor-donor systems? As an example, let us consider a binary system of a neutron star and an evolved star (e.g. red giant) that has expanded, filled its roche lobe, and started the mass transfer onto the neutron star.
Under certain conditions such mass transfer can become a runaway process: the neutron star is engulfed, and the matter that leaves the outer layers of the donor quickly surrounds the binary system, so that the two objects find themselves orbiting each other in a dense gaseous environment. Such a phase is generally referred to as the common-envelope phase.
The common envelope phase ends either with the ejection of the common envelope or by the merger of the two systems still inside the envelope.
I understand the latter possibility, namely that the dynamical interaction of the two objects in a dense environment would lead the system to lose orbital energy, shrinking their orbital separation and, with further losses through gravitational-wave emission, leading to a merger.
What puzzles me is: why, and how is it possible that the common envelope is instead being ejected?
| The envelope of the larger star may be rather weakly bound. Roughly speaking, the gravitational binding energy of the envelope is $-GMm/r$, where $M$ is the mass interior to the envelope, $m$ is the mass of the envelope and $r$ is its characteristic radius.
When material is accreted onto the neutron star, then a fraction of the kinetic energy on impact with the surface will be released as radiation - roughly $Gm_{\rm NS} m_{\rm acc}/2r_{\rm NS}$, where $m_{\rm NS}$ and $r_{\rm NS}$ are the mass and radius of the neutron star and $m_{\rm acc}$ is the accreted mass - and absorbed in the envelope.
The envelope could be ejected if the radiated energy exceeds the (modulus of) the envelope binding energy. This could happen because even though $m_{\rm acc}$ could be small compared with the mass of the engulfing star, $r_{\rm NS}\ll r$.
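An order-of-magnitude sketch of my own makes the point explicit (all the stellar parameters below are illustrative, not taken from any particular system): because $r_{\rm NS}$ is so small, only a tiny accreted mass releases an energy comparable to the envelope binding energy.

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m

M_core = 1.0 * M_sun        # mass interior to the envelope (illustrative)
m_env = 1.0 * M_sun         # envelope mass (illustrative)
r_env = 100.0 * R_sun       # characteristic envelope radius (illustrative)

m_ns = 1.4 * M_sun          # neutron star mass
r_ns = 12e3                 # neutron star radius (m)

E_bind = G * M_core * m_env / r_env            # |binding energy| of the envelope
m_acc_needed = E_bind * 2 * r_ns / (G * m_ns)  # accreted mass releasing that much energy

print(f"|E_bind|     ~ {E_bind:.1e} J")
print(f"m_acc needed ~ {m_acc_needed / M_sun:.1e} M_sun")   # of order 1e-7 M_sun
```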
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to statically generate lift with the difference in pressure like wings? If I understood it correctly, the shape of the wings and/or propellers generates lift/thrust through the difference in pressure on the two sides of the wings/propellers, where the lower side has higher-pressure airflow and the upper side has lower-pressure airflow.
With this in mind, I was wondering if it is possible to generate an area of low pressure around the upper part of an aircraft without moving balloons, wings or propellers/rotors.
A "static lift" is the best way I could put it.
So, would such thing be possible? Or lift would only be achieved with the airflow that wings already work around?
| Wait, you want to move something upward against the pull of gravity by differentially lowering the pressure on its upper surface?
Uh, have you considered sticking a straw in a milkshake and sucking the shake into your mouth?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 4
} |
Equilibrium of Electromagnetic Force Between Two Moving Charges Please refer to the 2015 discussion titled "Magnetic Force Between Two Charged Particles", where a couple of the commenters present the generally-accepted equation for the magnetic force between moving charged particles. Magnetic force between two charged particles?.
This result appears to be problematic, because unlike the predicted electric forces the magnetic forces are not aligned with the vector between the locations of the two charges. As a result, the combined electric and magnetic forces on the system do not appear to conserve momentum. The sums of the electric and magnetic forces on each charge would have to be equal, opposite, and colinear in order to conserve both linear and angular momentum.
The two-charge interaction should underlie all of electromagnetic theory, shouldn't it?
What am I missing?
|
As a result, the combined electric and magnetic forces on the system do not appear to conserve momentum.
That is correct. The EM force does not conserve momentum of the interacting charged particles. The EM field itself contains momentum which must be accounted for to get the conservation of momentum of the total system including both particles and fields.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is effective mass used in calculating kinetic energy of electron in semiconductor? Is effective mass used in calculating the kinetic energy of an electron in a semiconductor? I recall it was just used to take the internal forces into account so that the expression for the force fits well. But why does
$E_k= \frac{\hbar^2k^2}{2m_e^*}$ use $m_e^*$, the effective mass?
| Effective mass is not the real mass of electrons. For a free electron we don't need to consider effective mass, here electron's real mass can easily describe things.
But for electrons in solids (such as in a semiconductor), it is easier to work with the effective mass, because it enables one to proceed as if the electron were essentially free, so the same mathematical procedures as for a free electron can be used.
That is why, as you wrote, the kinetic energy of the electron can be expressed in terms of its effective mass, $E_k = \frac{\hbar^2k^2}{2m^*}.$
Effective mass can be positive, zero and even negative.
This effective mass is actually a tensor, with the tensor components given by:
\begin{equation}
\left(\frac{1}{m^*}\right)_{ij}=\frac{1}{\hbar^2}\frac{\partial^2E}{\partial k_i\partial k_j}.
\end{equation}
You can see, the value of effective mass depends on the dispersion relation $(E$ vs $k)$ of the electron in the solid. That is, how energy of the electron is dependent on the wave vector.
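As a worked example of that tensor formula (a one-dimensional sketch of my own, using an illustrative tight-binding dispersion rather than any specific material): the second derivative of $E(k)$ gives an effective mass that is positive at the band bottom and negative at the band top.

```python
import sympy as sp

hbar, t, a, k = sp.symbols('hbar t a k', positive=True)

E = -2 * t * sp.cos(k * a)                   # illustrative 1-D tight-binding band

m_eff = hbar**2 / sp.diff(E, k, 2)
print(sp.simplify(m_eff))                    # hbar**2/(2*a**2*t*cos(a*k))

print(m_eff.subs(k, 0))                      # +hbar**2/(2*a**2*t): band bottom, positive
print(m_eff.subs(k, sp.pi / a))              # -hbar**2/(2*a**2*t): band top, negative
```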
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\exp(-i \theta \sigma_m \otimes \sigma_n)$ represent a rotation operator? It is well known that $\exp(-i \sigma_k \theta)$ where $\sigma_k$ $(k=x,y,z)$ is a Pauli matrix, represents the rotation operator about $k$-th axis. What physical interpretation does $\exp(-i \theta \sigma_m \otimes \sigma_n)$ have, where $\otimes$ is the tensor product?
| As emerged from the comments on your question, the rotation is better understood as a geometrical concept rather than a physical one.
The word rotation itself helps convey a geometrical concept when it is isomorphic to a 3D space. But when that is not the case, its meaning is preserved, even if you lose the ability to visualize it in your mind.
I believe that when dealing with quantum mechanics it is particularly useful to go beyond visual reasoning, especially in scenarios like the one you are asking about, since, depending on $\theta$, the operator is able to entangle the two spins (each a two-dimensional Hilbert space).
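A quick numerical illustration of that last point (a sketch of my own; scipy is used only for the matrix exponential): $e^{-i\theta\,\sigma_x\otimes\sigma_x}$ applied to the product state $|00\rangle$ gives $\cos\theta\,|00\rangle - i\sin\theta\,|11\rangle$, which is entangled for generic $\theta$.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.7

U = expm(-1j * theta * np.kron(sx, sx))      # exp(-i theta sigma_x tensor sigma_x)

ket00 = np.array([1, 0, 0, 0], dtype=complex)
psi = U @ ket00
print(np.round(psi, 3))    # cos(theta)|00> - i sin(theta)|11>

# Schmidt coefficients of the reshaped state: two nonzero values means entangled.
print(np.linalg.svd(psi.reshape(2, 2), compute_uv=False))
```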
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/746975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Past and Future I'm new to physics
we've had an argument in our class about:
we know that present (and/or past) can and will affect future.
But how do we know if the future can affect past or present?
Is that even possible? What principles are in effect here?
for example: if I throw a piece of paper in trash can, how it can affect my grand grand grand grand (...) parent?
| When you study special relativity, there's a concept called "causality". It explains how different events can be related to each other. This is, of course, due to the speed of light being constant and only achievable by electromagnetic waves.
Moreover, we have to consider the light cone (see figure below). It arises from the relation between two different events in spacetime. There are timelike, spacelike and lightlike separations between events (or so I've been taught). For example, two timelike-separated events can occur at the same place in some frame, but never at the same time. It is also fascinating that, for spacelike-separated events (say E1 and E2), if E1 happens after E2 in some inertial frame of reference, then another inertial frame can see E1 happen before E2.
Now, if we take a look at the light cone, you can see that the surface of the cone is the boundary for any event that has happened "inside" the light cone. Notice too that the centre is a given event, let's say E0. Every event inside the light cone can be related to E0 by a cause-effect relationship, but if an event is outside the light cone then, because the surface represents the travel of information at the speed of light, it cannot be related to E0, since to do so information would have to travel faster than the speed of light, which we assume is impossible.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/747088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why is the current the same after passing through a resistor even when the drift velocity goes down? A resistor converts some of the electrical energy into heat energy, implying that the energy goes down, implying that the force with which an electron moves, and consequently, the drift velocity goes down.
Now, I=naeV where V is the drift velocity, so shouldn't the current go down after an electron has passed through a resistor?
I am familiar with the pipe-water-flow analogy, but my issue with that is it just involves water flowing, not the loss of any energy from that water.
Where am I going wrong?
This is with reference to steady state.
|
A resistor converts some of the electrical energy into heat energy,
implying that the energy goes down,
The energy of the charge does not go down in the resistor. That's because electrons continually gain kinetic energy from the source of the electric field while giving up an equal amount of kinetic energy in collisions with the particles of the resistor, where it is eventually dissipated as heat.
Now, I=naeV where V is the drift velocity, so shouldn't the current
go down after an electron has passed through a resistor?
No. The current must be constant in a given resistor, or in a series-resistance circuit, by conservation of charge. What can vary is the drift velocity $V$ if the cross-sectional area $a$ varies in the resistor. In order for the number of charges crossing an area per unit time to be constant (constant current), the drift velocity must increase for smaller areas and decrease for larger areas.
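A small numerical illustration of $I = naeV$ (a sketch of my own; the carrier density is the usual textbook value for copper): forcing the same current through two different cross-sections, the drift velocity adjusts inversely with the area.

```python
n = 8.5e28        # free-electron density of copper, m^-3 (typical textbook value)
e = 1.602e-19     # C
I = 2.0           # A, the same everywhere along the series path

for area in (1.0e-6, 0.25e-6):            # m^2: the conductor narrows to a quarter
    v_drift = I / (n * area * e)
    print(f"A = {area:.2e} m^2   v_drift = {v_drift:.2e} m/s")
```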
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/747275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 1
} |
How did we get the formula $d U = nCvdT$? Our teacher taught us that for any thermodynamic process, dU=nCvdT where Cv is molar specific heat capacity at constant volume and dU is change in internal energy. How did we get this formula and why is it valid for all processes
| It is not valid for all processes. For a single uniform system it is valid for constant-volume processes. It is also always valid for ideal gases (even for processes which change the volume).
Heat capacity at constant volume is the amount of heat that needs to be added to the system to increase its temperature by unit degree. Mathematically,
$$
C_V(T) = \lim_{V=const., \Delta T\to 0} \frac{\Delta Q}{\Delta T}.
$$
or
$$
C_V(T) = \frac{dQ}{dT}.
$$
The molar specific heat capacity $c_V$ is the heat capacity per mole, so we have
$$
C_V = n c_V,
$$
where $n$ is number of moles in the system.
If the volume is kept constant, the internal energy $U$ cannot change through mechanical work, so it can only change via adding heat, and we have $\Delta U = \Delta Q$.
Combining all these facts, we get
$$
dU = C_V dT = nc_vdT.
$$
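A worked number, as a sketch of my own (assuming a monatomic ideal gas, for which $c_V = \frac{3}{2}R$):

```python
R = 8.314           # J / (mol K)
c_v = 1.5 * R       # molar heat capacity at constant volume, monatomic ideal gas
n = 2.0             # mol
dT = 50.0           # K

dU = n * c_v * dT
print(dU)           # ~1247 J, regardless of how the volume changed along the way
```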
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/747357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Why does ice on my car's windshield melt when it's below zero degrees Fahrenheit? This morning I looked outside at my car and windows were covered in ice. The temperature this morning was -7F. A couple of hours later now the temperature is 0F, yet everything on the car is shiny and clear.
What happened to the ice? It just disappeared.
Edit: The car was in the sun.
| If the car was in the sun, the paint, glass and metal of the car will absorb energy from the sunlight and warm up slightly. The frost will then melt, but only on the sunny side of the car.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/748022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is the rate with which an object gains speed when it is dropped dependent? Is the rate with which an object gains speed when it is dropped dependent or independent of the object's weight?
(gravitational force)
| It is usually independent of the object's weight.
Weight is given by $W = mg$, while Newton's second law says $F = ma$. Equating the two, the mass cancels out, and $a = g$. So the acceleration is usually independent of the object's weight.
Of course, there are possible subtle effects (like air resistance) that can change this conclusion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/748108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Landau vibration of molecules angular momentum I'm going through Landau's Mechanics, and I'm a bit confused as to how he's eliminating the total angular momentum. First of all, why can't we simply let
$$\textbf{M}=\sum m_a\textbf{r}_a\times\textbf{v}_a=const$$
which is how I think he eliminated the translational motion.
Secondly, he says that
Since the angular momentum is not the total time derivative of a function of coordinates, the condition that it is zero cannot in general be expressed by saying some such function is zero
I'm not sure what is the significance of this. The only thing of note I remember about total time derivatives is that functions that are a total time derivative in the Lagrangian are conserved.
Thirdly, he says the condition is equivalent to
$$\sum m_a\textbf{r}_{a_0}\times\textbf{u}_a=0$$
where
$$\textbf{M}=\frac{d}{dt}\sum m_a\textbf{r}_{a_0}\times\textbf{u}_a$$
Since this is the total time derivative form, couldn't it just be equal to a constant? Is it equal to zero out of simplicity?
| This is done in order to solve the problem using a method analogous to requiring the centre of mass to be at the origin. Just as with the centre of mass, the mathematical conclusion only goes as far as saying that the quantity is constant, but it is more convenient to set it equal to zero.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/749059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Short circuit doubt Short circuits occur when a live wire comes into contact with a neutral wire due to poor insulation and the like, and the main gist of the term short circuit is the fact that the wire gets heated up so fast it might burn. So if I'm getting this right, that would only mean that a very high current flows through the wire (because of Joule's equation?), but my question is how? When a live and a neutral wire come into contact, how is there a potential difference to begin with (aren't they at the same potential now?), and how can current flow without voltage?
| In the case of AC current, the concept of potential difference is not quite appropriate for such extreme cases. If we could measure two nearby points of the (short-circuited) circuit, including points along the windings inside the transformer next to your home, a voltmeter would show zero wherever the resistance between the points is negligible. Nevertheless there is an emf resulting from the variation of the magnetic flux in the transformer.
Even with zero ohmic resistance, the AC current is limited by the transformer reactance.
But real wires always have some resistance, and the maximum current can be large enough to quickly heat and break the weakest part of the circuit.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/749545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Why is the flat spacetime of special relativity not a linear vector space? Why is the flat spacetime of special relativity not a real linear vector space? It seems to satisfy all the axioms for a set to form a vector space. I mean, adding two spacetime points $(t_1,\vec{r}_1)$ and $(t_2,\vec{r}_2)$ gives yet another spacetime point $(t_1+t_2,\vec{r}_1+\vec{r}_2)$. So I don't see a problem with closure, or with vector addition being commutative or associative. The null vector is the spacetime point $(0,\vec{0})$. I also hope that it is closed under scalar multiplication.
| Flat spacetime is more naturally described as an affine space, not a vector space. An affine space is basically a vector space without an origin. There is no unique natural event in flat spacetime which is naturally distinguished as "the" origin.
If we neglect curvature and take two events in spacetime say A is the supernova SN 2003fg and B is the supernova SN 2006gy then without choosing a third event and designating it as an origin, what event in spacetime is A+B? Indeed, it doesn't make sense to add A and B, any more than it makes sense to add Paris to Caracas.
In contrast, without choosing an origin we can subtract B-A to get a vector. This is what an affine space does.
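A tiny numerical illustration of the affine-space point (a sketch of my own; the coordinates and the origin shift are arbitrary): "A + B" names different events depending on where you put the origin, while the difference B - A does not.

```python
import numpy as np

origin_shift = np.array([5.0, 1.0, -2.0, 3.0])   # chart 2 origin, expressed in chart 1

A1 = np.array([0.0, 1.0, 0.0, 0.0])              # event A in chart 1 coordinates
B1 = np.array([2.0, 0.0, 4.0, 0.0])              # event B in chart 1 coordinates
A2, B2 = A1 - origin_shift, B1 - origin_shift    # the same events in chart 2

# "A + B" is chart-dependent: the two results name different physical events.
print((A1 + B1) - origin_shift)
print(A2 + B2)

# The displacement B - A is the same vector in both charts.
print(B1 - A1, B2 - A2)
```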
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/749814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why can't I use $\tan()$ to find the force of tension within a rope that forms a triangle? Apologies if this is too simple, but my teachers could not give me an answer. The following is a question from my grade 12 physics homework:
Now, the solution for $T_2$ is simple:
$T_{1_x} =T_{2_x}$
$96 cos(60) = T_2cos(30)$
$T_2 = 96 cos(60) / cos(30)$
$T_2 = 55 N $
This is all logical, but this is a right angle triangle so the following should also work:
$tan(60) = T_2 / 96$
$96 tan(60) = T_2$
$T_2 = 166 N $
It does not. 55N is the correct answer. So, why can I not use tan() to find the value to $T_2$?
| Your intuition is correct, but you are using the wrong angle.
$$\tan(30°) = \frac{T_2}{96\ \mbox{N}} \Rightarrow T_2 \simeq 55\ \mbox{N}$$
Or, alternatively, you are using the wrong definition of tangent:
$$\tan(60°) = \frac{96\ \mbox{N}}{T_2} \Rightarrow T_2 \simeq 55\ \mbox{N}$$
I was too quick in my previous answer. I arrogantly assumed that this was a simple trigonometry mistake and I tried changing the angle on the calculator, which gave me the right result on the first try, further fueling my belief.
However, as noted by the OP, the definition of tangent is correct: opposite/adjacent. And yet here the opposite works.
Let's restart from this step in your calculation:
$$T_2 = 96\ \mbox{N}\ \frac{\cos(60°)}{\cos(30°)}$$
Now, if we remember that $\cos(60°) = \sin(30°)$, we get that $T_2 = 96\ \mbox{N} \tan(30°)$. Alternatively, we could use $\cos(30°) = \sin(60°)$ to obtain $T_2 = \frac{96\ \mbox{N}}{\tan(60°)}$.
This justifies the two equations I wrote above, but why is there an apparent conflict with the geometry of the problem?
A first hint comes from the "length" of the sides. Notice how the short side is $96\ \mbox{N}$ "long", while the long side is only $55\ \mbox{N}$ "long"!
This tells you that the geometry you see in the picture is valid to calculate the length of the wires holding what I assume is a frame, but it does not apply to the tensions. If you actually draw the tensions to scale, you will notice that the triangle they form has a vertical hypotenuse, instead of a horizontal one. That's because the resulting force you get by summing the two tensions has to be purely vertical to balance the weight of the frame.
Sorry for the inaccurate answer before.
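For completeness, a quick numerical check of the numbers above (a sketch of my own):

```python
import numpy as np

T1 = 96.0
deg = np.pi / 180

T2_components = T1 * np.cos(60 * deg) / np.cos(30 * deg)   # horizontal balance
T2_tangent = T1 * np.tan(30 * deg)                         # same thing, correct angle

print(T2_components, T2_tangent)              # both ~55.4 N
print(T1 * np.tan(60 * deg))                  # ~166 N: tangent of the wrong angle
```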
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/750271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What portion of the universe is black holes? What portion of the universe is black holes? Is it possible to estimate the percent of all mass that is in the black holes?
| The cosmic inventory of the mass of various types of object is discussed in a well-known review by Fukugita & Peebles (2004).
They estimate that the fraction of the total matter in the universe that is made up of stellar-mass black holes (the final states of massive stars) is about 0.00025 with about a 30% uncertainty. A further fraction of about $10^{-5}$ is in the form of supermassive black holes at the centres of galaxies.
NB. These fractions are of the total (including dark matter). If you want the fraction of "normal", baryonic mass, them multiply these fractions by about 4.5.
As another answer correctly points out, a candidate for dark matter is primordial black holes, formed in the very early universe. Since dark matter makes up about 82% of the matter in the universe and no dark matter candidates have yet been identified, then it is possible that primordial black holes make up 82% of the matter in the universe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/750434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
How does the magnetic field strength (in Teslas) change when two cylindrical magnets are pulled appart? I have two cylindrical magnets aligned such that the opposite poles are facing each other (N-S N-S).
I am trying to find a mathematical relationship that models the change in the magnetic field strength (B - measured in Tesla) at the midpoint of the two magnets when they are pulled apart (with a distance between them denoted r).
I have found many seemingly conflicting resources that say the relation is one of the following:
* B = 1/r.
* B = 1/r^2.
* B = 1/r^3.
I am very unsure of which is applicable to my scenario. I should note that I have a high school level understanding of magnetism so I struggle to understand some of the more complex explanations. I would appreciate it very much if someone could provide some insight into this.
| The B field changes as the magnets pull apart. All the answers below are approximate.
* When they are still close together they behave like infinite charged planes, so B does not change with distance.
* As they move further apart, the near end of each magnet looks like a point charge, with the far ends of the magnets being negligible. Then the field falls like 1/r^2.
* As they move still further apart, they begin to look like two magnetic dipoles, with B falling like 1/r^3 (the numerical sketch below illustrates the near-constant and 1/r^3 limits).
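The sketch below is my own (the magnet dimensions and the overall scale are illustrative); it uses the standard on-axis field of a uniformly magnetized cylinder to show the midpoint field between two facing magnets flattening out at small gaps and falling roughly as 1/r^3 at large gaps.

```python
import numpy as np

scale = 1.0             # mu_0 * magnetization / 2, arbitrary units
R, Lm = 0.01, 0.02      # magnet radius and length (m), illustrative values

def B_axis(z):
    """On-axis field of a uniformly magnetized cylinder, distance z from its face."""
    return scale * ((z + Lm) / np.sqrt((z + Lm)**2 + R**2) - z / np.sqrt(z**2 + R**2))

r = np.logspace(-4, 0, 9)           # gap between the two facing magnets
B_mid = 2 * B_axis(r / 2)           # both magnets contribute equally at the midpoint

for gap, b in zip(r, B_mid):
    print(f"r = {gap:8.4f} m   B_mid = {b:.3e}")
# Small r: B_mid is nearly flat; large r: B_mid falls roughly as 1/r^3 (dipole limit).
```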
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/750809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Commutator of two Lorentz charges/angular momenta In Barton Zwieback's book "A first course in string theory" page 261, we calculated a Lorentz charge/angular momentum $M^{-I}$ of the open bosonic string in the light-cone formulation to be;
$$
M^{-I} = x_{0}^{-}p^{I} - \frac{1}{4\alpha^{'}p^+}\left(x_{0}^I\left(L_{0}^\perp + a\right) + \left(L_{0}^\perp + a\right) x_{0}^I\right)\\
- \frac{i}{\sqrt{2\alpha^{'}}p^+}\sum_{n\ge1}\frac{1}{n}\left(L_{-n}^{\perp}\alpha_{n}^I - \alpha_{-n}^IL_{n}^{\perp}\right).\tag{12.151}$$
Then he says that calculating the commutator $[M^{-I}, M^{-J}]$ is very long and not easy. He then writes:
$$
[M^{-I}, M^{-J}] = - \frac{1}{\alpha^{'}p^{+2}}\sum_{m\ge1} \left(\alpha_{-m}^{I}\alpha_{m}^{J} - \alpha_{-m}^{J}\alpha_{m}^{I}\right)\\ \times \left\{m \left[1 -\frac{1}{24} (D-2)\right] + \frac{1}{m}\left[\frac{1}{24} (D-2) + a\right]\right\}.\tag{12.152}
$$
My question is: how can one prove such a result? I tried to find a paper or something on the internet, but no progress so far.
| Briefly speaking Ref. 2 argues on general grounds an ansatz (2.3.24) for the angular momentum commutator, and then sandwiches$^1$ the commutator between a bra and ket vacuum state to simplify the calculation of the coefficients (2.3.35) in the ansatz.
References:
1. B. Zwiebach, A first course in String Theory, 2nd edition, 2009; section 12.5, eq. (12.152).
2. M.B. Green, J.H. Schwarz and E. Witten, Superstring theory, Vol. 1, 1986; section 2.3, eqs. (2.3.24+35).
--
$^1$ A similar sandwich trick is often applied to simplify the calculation of the central charge in the Virasoro algebra, cf. e.g. eq. (2.2.31) in Ref. 2.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/751020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does magnetic force only act on moving charges? I don't understand why the magnetic force only acts on moving charges. When I have a permanent magnet and place another magnet inside its field, they clearly act as forces onto one another with them both being stationary. Also, I am clearly misunderstanding something.
| At an effective classical level, the atoms in permanent magnets do contain moving electric charges at the microscopic level: the orbiting electrons. These moving charges correspond to microscopic electric currents, and the magnetic fields act on these microscopic currents.
This picture is certainly a simplification of the underlying quantum effects, but I think it's accurate enough to clear up your confusion.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/751358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 5
} |
On Bohr's response to EPR If I understand correctly, the EPR paper (1935) points out that quantum mechanics is an incomplete theory if it describes individual particles and measurements. This is true by the mathematical formalism. But already in 1926 quantum mechanics had its statistical interpretation, and in 1930 Heisenberg in his Chicago Lectures admits that position and momentum can be known exactly. So why didn't Bohr just give a short reply: $$\text{"It's a statistical theory."}$$
| It is possible that Bohr didn't say quantum mechanics is a statistical theory because that claim is false. The squared amplitudes of quantum states don't always obey the rules of probability; e.g. quantum interference experiments in general break those rules. See Section 2 of
https://arxiv.org/abs/math/9911150
Also, quantum mechanics does provide a complete description of individual quantum systems in terms of their Heisenberg picture observables:
https://arxiv.org/abs/quant-ph/9906007
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/751512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Relationship Between Velocity at Lowest Position and Gravitational Acceleration in Pendulum Motion To my knowledge there are two methods of finding this relationship. One with the centripetal force and the other with conservation of energy. I've left my work in the image below.
The problem here is that both of them give the complete opposite results. My question is, which of the two are correct and why is the other one wrong?
Any and all answers are appreciated, thanks.
| Sorry, initially I did not understand what you were trying to do. The issue is that $T$ is not a constant: $T$ increases as well, and it increases with $g$ by more than the $-mg$ term decreases, leading to a net increase under the radical and overall. If you think about it, this makes sense because $T$ accounts for both gravity and the centripetal force.
In order to use the first method to find $v$, you would have to substitute $T$ out of the equation from elsewhere. This is not a convenient way to solve the problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/751915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there anything truly "stationary" in the universe? Ok, so I read this question and it got me thinking about something. Is there anything genuinely stationary in our universe? What does it mean to be stationary or devoid of any motion? If there isn't anything stationary, can there be a time when a thing is stationary and devoid of any motion in the future? Is a reference point always needed to classify a particular thing as stationary? I may be sitting right now, not making the slightest movement, but that does not mean I am not in motion. I am in motion, in reference to the earth, the solar system and the milky way galaxy
Also, what would happen if, say, a "stationary" object was present in our universe? What would be the conditions required for this anomaly?
P.S. I have taken a look at this question too, but it doesn't completely answer the particular question I am asking, hence this question
| Your question is like asking whether there is a particular point that is the centre of the surface of the Earth. The answer is that the surface of a sphere has no unique centre, so it is meaningless to speculate about where it might be. At any point on its surface, you can consider yourself to be centred on a sphere in the sense that the surface is spread around you equally in all directions. But all points on the surface are the same in that sense- none of them is the absolute centre. Likewise, there is no frame of reference in the universe that is absolutely stationary.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/752328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Why can the long-lived kaon not decay into two pions? The short-lived and long-lived states of the kaon, $|K_1>$ and $|K_2>$ respectively, have the following compositions if they are the eigenstates of CP parity:
$|K_1> = \frac{|K^0>\:-\:|\bar{K^0}>}{\sqrt2}$
$|K_2> = \frac{|K^0>\:+\:|\bar{K^0}>}{\sqrt2}$
In the book "Introduction to elementary particles" by David Griffiths, in the section 'Symmetries' for neutral kaons, it is stated that $|K_1>$ can decay to two pions under CP symmetry, and that with the right combination of orbital angular momenta the three-pion system $\pi^0 \pi^+ \pi^-$ can have CP parity $+1$, so $|K_1>$ can also decay into three pions. But $|K_2>$ can never decay to two pions.
But wouldn't it be possible to use the same argument as in the three-pion case for two pions: with the right combination of orbital angular momenta, antisymmetric under parity and antisymmetric in the isospin part so that the total wave function is symmetric, giving CP parity $-1$? Then wouldn't it be possible for $|K_2>$ to decay to two pions while respecting CP?
Experimentally the two-pion decay is observed (that is the CP violation), but why is it theoretically not allowed?
|
But wouldn't it be possible to use the same argument as three pions case for two pions with right combination of orbital angular momenta,
The argument is not about orbital angular momentum, but about the angular momentum any two particles have by the definition of angular momentum:
The angular momentum of a particle of mass m with respect to a chosen origin is given by
$\mathbf{L}=\mathbf{r}\times\mathbf{p}$, where $\mathbf{r}$ is the position vector from the chosen origin and $\mathbf{p}$ the momentum vector.
In the case of kaon decays to pions, the origin is taken to be the kaon, but there is no orbital angular momentum defined for particles that are not bound. The pions coming out of the decay of kaons are not bound to each other, only correlated by quantum numbers and conservation laws. So your use of an ad hoc orbital angular momentum is not correct.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/752454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Horizontal force of swinging beam In the diagram, a weighted beam is hinged to a vertical wall and is swinging downward.
As shown in the picture, when the beam is perpendicular to the wall, the horizontal force by the hinge is to the left acting as a centripetal force.
I am curious about the direction and magnitude of the horizontal force as the beam falls. I know that when the beam is just beginning to fall from a nearly vertical upward position, the horizontal force has to point to the right because the x-component of the center of mass is accelerating to the right. However, I am not sure when the horizontal force flips to the left between this moment and when the beam swings to the horizontal position as in the picture.
| You are right that the force is to the right at first. The center of mass starts at the wall. As the beam starts to fall, it starts to rotate. The center of mass starts to move down and away from the wall.
The total horizontal force is the horizontal component of the reaction force that keeps the end of the beam still. The reaction force pushes to the right. The center of mass acquires a velocity with a component to the right.
As the beam nears horizontal, the horizontal component of velocity of the center of mass decelerates toward $0$. The reaction of the hinge pulls back to the left.
Consider a similar beam toppling on a frictionless table. There is no horizontal component of force. The center of mass would drop straight down. The bottom end would slip left.
In this problem, the end cannot slip left. The reaction force of the hinge pushes it right.
Consider another beam that starts to topple on a table. This time it starts with friction, so the beam acquires a velocity to the right. And then friction disappears. The center of mass would keep the rightward component of velocity. The bottom end would slip right.
In this problem, the end cannot slip right. As the angular velocity increases, the magnitude of centripetal force increases. And so does the leftward component. The hinge pulls to the left after a certain point.
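If it helps to see where the flip happens, here is a numerical sketch of my own. It assumes a uniform rod of length $L$ hinged at one end and released from the upward vertical; the "weighted beam" in the question may have a different mass distribution, which would shift the numbers but not the qualitative story.

```python
import numpy as np

# theta is measured from the upward vertical. Energy conservation gives
# omega^2 = 3g(1 - cos theta)/L, and differentiating gives alpha = 3g sin(theta)/(2L).
# The horizontal acceleration of the centre of mass (at L/2) then fixes the hinge force H.
m, L, g = 1.0, 1.0, 9.81
theta = np.linspace(1e-3, np.pi / 2, 2001)

omega2 = 3 * g * (1 - np.cos(theta)) / L
alpha = 3 * g * np.sin(theta) / (2 * L)
a_x = (L / 2) * (alpha * np.cos(theta) - omega2 * np.sin(theta))  # CM accel., away from wall
H = m * a_x                                                       # horizontal hinge force

flip = theta[np.argmax(H < 0)]
print(np.degrees(flip))        # ~48.2 deg for a uniform rod, i.e. where cos(theta) = 2/3
```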
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/753832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |