Mathematics Stack Exchange News Feeds

  • What does holonomy measure?
    by exxxit8 on June 17, 2021 at 8:24 am

    I have difficulty understanding conceptually what holonomy measures. It can return a phase shift of a vector parallel-transported along the connection. If there is no phase shift, it means that the connection is flat, and if there is a phase shift, then it should indicate that the space is curved. But I have found examples where a flat space can have non-trivial holonomy; for example, a cone has non-trivial holonomy (see On a flat surface, can a holonomy be nontrivial around certain curves). So my question: what information does holonomy give us? Anything about the curvature?

  • Finding the bounds for $|e^z - 1|$ on the unit circle.
    by E.E. on June 17, 2021 at 7:47 am

    The sharp upper bound is relatively easy to find: $$|e^z - 1| = \left|\sum_{n = 1}^\infty \frac{z^n}{n!} \right| \leq \sum_{n = 1}^\infty \frac{|z|^n}{n!} = e^{|z|} - 1 = e - 1$$ and it is attained at $z = 1$. I am wondering if there is a simple way to obtain a positive lower bound. I suspect a sharp lower bound is $(1 - 1/e)$ but I cannot prove it. I was told that $(3 - e)$ is a positive lower bound, but I could not prove that either. Remark: One can do some horrible single-variable calculus by writing $z = e^{it}$, where $t \in [0, 2\pi]$, but I would like to know if there is another (possibly much simpler) way to find a nontrivial lower bound. Edit: It seems that I got many answers like the following, but they are deleted by their authors very soon. I think it is a good idea to show a wrong attempt, so I put it here: Putting $z = e^{it}$, then \begin{align*}|e^z - 1|^2 & = e^{2 \cos t} - 2e^{\cos t} \cos (\sin t) + 1 \\ & \geq |e^{\cos t} - 1|^2 \\ & \geq (1 - 1/e)^2 \end{align*} The last estimate is WRONG when $t = \pi/2$. (End of edit)

  • Behavior of $\sum_i^n (1-\frac{1}{i})^s$ as a function of $s$?
    by Yaroslav Bulatov on June 17, 2021 at 5:06 am

    I’m interested in the behavior of the following sum as a function of $s$: $$\frac{1}{n}\sum_{i=1}^n \left(1-\frac{1}{i}\right)^s$$ For $n=1000$ and $s\in (1,10000)$, this seems almost linear on a log-plot (notebook); any tips on how to model this analytically?
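    The average is cheap to evaluate directly; a minimal stdlib sketch (my own, not taken from the linked notebook) for tabulating it before attempting an analytic model:

```python
def avg_power_sum(n, s):
    # (1/n) * sum_{i=1}^{n} (1 - 1/i)^s ; the i = 1 term is 0
    return sum((1.0 - 1.0 / i) ** s for i in range(1, n + 1)) / n

# Each term (1 - 1/i)^s = exp(s * log(1 - 1/i)) decays in s, so the
# average is strictly decreasing in s for fixed n.
for s in (1, 10, 100, 1000, 10000):
    print(s, avg_power_sum(1000, s))
```

    For an analytic model, note that for large $i$ one has $(1-1/i)^s \approx e^{-s/i}$, so the sum behaves like $\frac{1}{n}\sum_i e^{-s/i}$, which is one route to an asymptotic estimate.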

  • Who decided to use $(a, b)$ for open intervals and $[a, b]$ for closed intervals?
    by MJD on June 16, 2021 at 6:02 pm

    The use of $$(a,b)$$ as an abbreviation for $$\{x\in\Bbb R\mid a<x<b\}$$ and $$[a,b]$$ as an abbreviation for $$\{x\in\Bbb R\mid a\le x\le b\}$$ is so widespread and so entrenched that I was surprised when I realized it was essentially arbitrary. A student asked me why it was that way and I got halfway through mumbling something like “It’s because closed sets have sharp edges and open sets have fuzzy ones” before I realized that was nonsense: under this theory $(a,b)$ for a closed interval makes as much sense as $[a,b]$, because closed discs and open rectangles are just as common as open discs and closed rectangles. If the two types of brackets had been switched, back at the beginning of time, I don’t think anyone would have found it less intuitive, if this were the explanation. I think what I was getting at perhaps is that one can imagine that the $($ curves toward the endpoint, and then away from it again without quite getting there, whereas the $[$ goes directly to the endpoint and stays there for a while. I don’t know if that was the intended intuition. Or, indeed, if there was any intended intuition. It’s quite possible that whoever invented this notation needed two kinds of brackets and arbitrarily assigned one to each type of interval. Or perhaps one of the notations was already well-established, and much later someone else wanted an analogous notation for the other kind of interval, and simply used a different kind of bracket. Do we know anything about this? I did not find anything with a Math SE or MO search, and I also looked at Jeff Miller’s site, but did not find anything. I have not checked Cajori, but it does not really seem like the kind of thing he covers.

  • If $f,f_m$ are Lipschitz with $\lim_{m\to+\infty} d(f_m(x),f(x)) = 0 $, is it true that the fixed point of $f_m$ converges to the fixed point of $f$?
    by Ban on June 16, 2021 at 10:31 am

    I’m going through a proof in which at one point the authors make the following statement: Let $(X,d)$ be a complete metric space and $f:X\to X$ and $f_m:X\to X$ Lipschitz with $lip(f)<1$ and $lip(f_m)<1\; \forall m\in\mathbb{N}$. Let $x^*_m$ be the fixed point of $f_m$ and $x^*$ the fixed point of $f$. If for each $x\in X$ we have that $$\lim_{m\to+\infty} d(f_m(x),f(x)) = 0, $$ it follows that $$\lim_{m\to+\infty} d(x^*_m,x^*) = 0 .$$ I tried to figure out how they came up with this by an argument along the lines of \begin{align*}d(x^*_m,x^*) &= d(f_m(x^*_m),f(x^*)) \leq d(f_m(x^*_m),f(x_m^*))+ d(f(x_m^*),f(x^*))\\&\leq d(f_m(x^*_m),f(x_m^*))+lip(f)d(x_m^*,x^*). \end{align*} Then $$0\leq (1-lip(f))\,d(x_m^*,x^*)\leq d(f_m(x^*_m),f(x_m^*)).$$ But now we can’t apply the first limit because $x_m^*$ depends on $m$. Another thing I tried is using the fact that $$\lim_{n \to+\infty}d(f^{\circ n}(x),x^*)=0 \; \forall x\in X,$$ where $f^{\circ n} = \underbrace{f\circ f\circ \dots \circ f}_{\text {n times}}.$ Then for a fixed $x\in X$ \begin{equation*}d(x^*_m,x^*)\leq d(x^*_m,f^{\circ n}_m(x)) + d(f^{\circ n}_m(x),f^{\circ n}(x))+ d(f^{\circ n}(x),x^*),\; \forall n\in\mathbb{N} \end{equation*} Now let $\varepsilon > 0$ and set $N$ big enough so that $d(f^{\circ N}(x),x^*)<\varepsilon/3$ and $d(f_m^{\circ N}(x),x_m^*)< \varepsilon/3$. Then \begin{align*} d(x^*_m,x^*) &\leq d(x^*_m,f^{\circ N}_m(x)) + d(f^{\circ N}_m(x),f^{\circ N}(x))+ d(f^{\circ N}(x),x^*)\\&<2\varepsilon/3 + d(f^{\circ N}_m(x),f^{\circ N}(x)). \end{align*} Now it can be shown that $\lim\limits_{m\to+\infty} d(f^{\circ N}_m(x),f^{\circ N}(x)) =0$, so there is some $M$ such that $d(f^{\circ N}_m(x),f^{\circ N}(x)) <\varepsilon/3\; \forall m>M$, from which we get $$d(x^*_m,x^*)<\varepsilon.$$ But I feel like something is not right in my second attempt. Is something missing from the statement?
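    A concrete toy instance of the statement can help build intuition (my own sketch; the maps below are assumptions chosen for illustration, not from the proof in question): on $X=\mathbb{R}$ take $f(x)=x/2$ and $f_m(x)=x/2+1/m$, so $lip = 1/2$, $x^* = 0$, and $x^*_m = 2/m \to 0$.

```python
def fixed_point(g, x0=0.0, iters=200):
    # Banach iteration: for a contraction this converges to the unique fixed point
    x = x0
    for _ in range(iters):
        x = g(x)
    return x

f = lambda x: 0.5 * x                    # fixed point x* = 0

def f_m(m):
    return lambda x: 0.5 * x + 1.0 / m   # fixed point x*_m = 2/m

# d(f_m(x), f(x)) = 1/m -> 0 pointwise, and indeed x*_m -> x*
for m in (1, 10, 100, 1000):
    print(m, fixed_point(f_m(m)))
```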

  • What is the definition of moduli space, in math vs in physics?
    by annie marie cœur on June 15, 2021 at 11:31 pm

    It is easy to find that there are many questions regarding moduli spaces on MSE: https://math.stackexchange.com/search?q=what+is+moduli+space But it seems to me that this phrase, moduli space, may mean many different things. For example, according to Wikipedia, the moduli space is used in physics to refer specifically to the moduli space of vacuum expectation values of a set of scalar fields, or to the moduli space of possible string backgrounds. Moduli spaces also appear in physics in topological field theory, where one can use Feynman path integrals to compute the intersection numbers of various algebraic moduli spaces. Moduli space also occurs in the context of algebraic geometry, as a geometric space (usually a scheme or an algebraic stack) whose points represent algebro-geometric objects of some fixed kind, or isomorphism classes of such objects; the intersection theory of Riemann surfaces; moduli spaces of Calabi-Yau manifolds; the Weil-Petersson metric on the moduli space of elliptic curves. So what is the definition of moduli space, in math vs in physics, after all these?

  • Explicit map from $\mathbb{N}^t \to \mathbb{N}$
    by Manoj upreti on June 15, 2021 at 8:50 am

    I know how to prove that the following map from $\mathbb{N}\times \mathbb{N} \to \mathbb{N}$ is injective: $(n_1, n_2) \to \binom{n_{1}+n_{2}}{2} + n_{1}$. An obvious generalization of this map from $\mathbb{N}^t \to \mathbb{N}$ seems to be the following: $(n_1, \dots, n_t) \to \sum_{j = 1}^t \binom{\sum_{i = 1}^j n_i}{j}$. I have a feeling that this generalized map from $\mathbb{N}^t \to \mathbb{N}$ is also injective, but I don’t know how to prove it. Geometrically it seems very difficult compared to the proof for the case $t=2$. I think there should be some algebraic proof. A rough sketch of the proof, or any suitable reference book where I can read the proof for this general case, would be a great help.
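    A brute-force check is easy and reassuring here (my own sketch). One caveat worth noting: injectivity needs $\mathbb{N}$ to start at $1$ (or an index shift), since with $0$ allowed even the $t=2$ map sends both $(0,0)$ and $(0,1)$ to $0$.

```python
from math import comb
from itertools import product

def encode(tup):
    # (n_1, ..., n_t)  ->  sum_j C(n_1 + ... + n_j, j)
    total, value = 0, 0
    for j, n in enumerate(tup, start=1):
        total += n
        value += comb(total, j)
    return value

# With n_i >= 1 the partial sums are strictly increasing, which is exactly
# the combinatorial number system; brute-force injectivity check for t = 3:
images = {}
for tup in product(range(1, 9), repeat=3):
    v = encode(tup)
    assert v not in images, (tup, images[v])
    images[v] = tup
print(len(images))  # 8^3 = 512 distinct values on this box
```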

  • What sorts of (sets of) equations are “approximately compatible” with the $2$-sphere?
    by Noah Schweber on June 14, 2021 at 7:27 pm

    Given a metric space $\mathcal{X}=(X,d)$ and an equational theory (in the sense of universal algebra) $\mathsf{E}$, say that an approximate model of $\mathsf{E}$ on $\mathcal{X}$ is a sequence $(\mathcal{M}_i)_{i\in\mathbb{N}}$ of structures in the signature of $\mathsf{E}$ with domain $X$ such that: in each $\mathcal{M}_i$, each function symbol gets interpreted as a continuous function $X^{arity}\rightarrow X$ in the sense of $d$; and for each $i\in\mathbb{N}$, each equation $t(x_1,…,x_n)=s(x_1,…,x_n)$ in $\mathsf{E}$, and each $a_1,…,a_n\in X$, we have $d(t(a_1,…,a_n), s(a_1,…,a_n))<2^{-i}.$ Say that $\mathsf{E}$ is approximately compatible with $\mathcal{X}$ iff $\mathsf{E}$ has an approximate model on $\mathcal{X}$. Compare this with the notion of (genuine) compatibility discussed here, where there is no “error” permitted. In the above-linked paper, a generalization of Adams’ theorem that there is no group structure on the $n$-sphere for $n\not\in\{0,1,3,7\}$ is proved: for $n\not\in\{0,1,3,7\}$ there is no “interesting” algebraic structure compatible with the $n$-sphere at all. On the other hand, approximate compatibility seems like an extremely weak notion, and I don’t see any nontrivial examples of equational theories which are not approximately compatible with (for example) the $2$-sphere: Is there an equational theory $\mathsf{E}$ which has a model with more than one element but is not approximately compatible with $S^2$? Note that for equational theories, “has a model with more than one element” is equivalent to (for example) “has a model of size continuum,” so once we rule out the genuinely trivial theories there is no cardinality obstacle.

  • Why is the 0 exponent to any number always equal to 1?
    by Harry Iguana on June 14, 2021 at 6:46 pm

    Let $n$ be an unknown number. Why does $n^0 = 1$ for any number $n$? What is the logic behind it? Is there a way to prove this?
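    One standard way to see it (a sketch of the usual argument; it needs $n \neq 0$) extends the quotient rule for exponents:

```latex
n^{0} = n^{a-a} = \frac{n^{a}}{n^{a}} = 1 \qquad (n \neq 0),
\qquad\text{e.g.}\qquad 2^{0} = 2^{3-3} = \frac{2^{3}}{2^{3}} = \frac{8}{8} = 1.
```

    Equivalently, one can take $n^0 := 1$ as the empty product; it is the only convention consistent with the rule $n^{a+b} = n^{a}\, n^{b}$.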

  • How can I make this kind of proof rigorous?
    by Kieren MacMillan on June 14, 2021 at 6:24 pm

    I have transformed a Diophantine equation into the form $$ \frac{a-b-3}{a-b-1} = \frac{2(b-1)(a+1)}{a^2+b^2}, \tag{$\star$} $$ where $a > b \ge 1$ are integers. I want to prove that $a-b=3$ [thus forcing $b=1$]. It’s easy to show that $1 \le a-b \le 3$ admits only the one desired solution. Now assuming $a-b \ge 4$, I can say \begin{align} \frac{a-b-3}{a-b-1} \simeq 1, \end{align} and thus \begin{align} 1 &\simeq \frac{2(b-1)(a+1)}{a^2+b^2} \\ a^2+b^2 &\simeq 2(b-1)(a+1) \\ &= 2ab+2(b-a-1) \\ \therefore\quad (a-b)^2 &\simeq 2(b-a-1) \le 2(-4-1)=-10. \end{align} With strict equality, this would clearly be impossible; QED. The proof might also be valid if I can qualify precisely what is meant by “$\simeq$”, such that the contradiction holds. QUESTION: How can I make this method of proof rigorous? EDIT: In case it helps, I have proven that $$ab = 4(256n^3-640n^2+533n-148) \qquad\text{and}\qquad a-b=16n-13$$ for some $n \ge 1$, and need to prove that $n=1$ is the only solution. Combining those two equations yields $$a^2+b^2=(8n-7)(256n^2-384n+145),$$ which has certain implications I might be able to leverage…

  • Show that $(a^2-b^2)(a^2-c^2)(b^2-c^2)$ is divisible by $12$
    by barista on June 14, 2021 at 5:04 pm

    Let $a,b,c\in\Bbb N$ with $a>b>c$. Then $K:=(a^2-b^2)(a^2-c^2)(b^2-c^2)$ is divisible by $12$. My attempt: Since each of $a,b,c$ is either even or odd, two of them share a parity; WLOG say $a,b$ are both even or both odd. In either case $a+b$ and $a-b$ are divisible by $2$, so $K$ is divisible by $4$. Note that any $n\in\Bbb N$ is one of $\overline{0},\overline{1},\overline{2}$ in $\operatorname{mod}3$. From this I can argue anyway, but I want to show that $K$ is divisible by $3$ in an easier or nicer way. Could you help?
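    A brute-force sanity check of the claim, with the mod-3 pigeonhole noted in a comment (my own sketch, not an answer to the question):

```python
# Squares are 0 or 1 mod 3, so two of a^2, b^2, c^2 agree mod 3 and one
# factor of K is divisible by 3; the parity argument supplies the factor 4.
for a in range(3, 30):
    for b in range(2, a):
        for c in range(1, b):
            K = (a * a - b * b) * (a * a - c * c) * (b * b - c * c)
            assert K % 12 == 0
print("ok")
```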

  • Orthogonality of generalized eigenbases of self-adjoint operators
    by user3716267 on June 14, 2021 at 1:54 pm

    As mentioned in another of my recent questions, given some operator on a Hilbert space, we can construct a Gelfand triple in order to “recover” eigenfunctions corresponding to elements of the spectrum which do not strictly have corresponding eigenfunctions in the Hilbert space “proper.” For example, the momentum operator $i\frac{d}{dx}$ has no “proper” eigenfunctions in $L_2(\mathbb{R})$, but it does have sinusoidal eigenfunctions in the Gelfand triple $H^s(\mathbb{R}) \subseteq L_2(\mathbb{R}) \subseteq H^{-s}(\mathbb{R})$. In the finite-dimensional case, we have a theorem that every Hermitian operator has an orthogonal eigenbasis. Does this generalize to the infinite-dimensional case of self-adjoint operators? Immediately, we encounter a problem: the sinusoids $t \to e^{ikt}$ are not “proper” elements of $L_2(\mathbb{R})$ (they fail to be square-integrable). So, we cannot say precisely that these eigenfunctions are orthogonal with respect to the inner product on the Hilbert space. Can we state something weaker? Perhaps we can recover a notion of “limiting orthogonality” by approximating our eigenfunctions with sequences of functions that are elements of $L_2(\mathbb{R})$ (e.g. approximating a sinusoid by a sequence of sinc functions with slower and slower decay)? Maybe there’s some other approach entirely? Or is this a doomed endeavor, and there is no useful generalization to be found?

  • Evaluating the following integral : $\int_0^\infty \frac{\arctan(x)}{x^{\ln (x)+1}} \mathrm{d}x$ [closed]
    by Med-Elf on June 14, 2021 at 1:50 pm

    I want to evaluate the following integral: $$I=\int_0^\infty \frac{\arctan(x)}{x^{\ln (x)+1}} \mathrm{d}x$$ I made this transformation: $$ \begin{align} I&=\int_0^\infty \frac{\arctan(x)}{x^{\ln (x)+1}} \mathrm{d}x\\ &=\int_0^\infty \frac{\arctan(x)}{x}\frac{\mathrm{d}x}{x^{\ln (x)}} \end{align}$$ Now I recognize these two usual integrals. My question is how I can use their results, because obviously $$\int fg \neq \int f \int g.$$ If that is impossible, is there any other method to evaluate this integral? Thanks in advance.
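    For what it’s worth, here is one route the post doesn’t mention (my own sketch, so please verify): substituting $x = e^t$ gives $x^{\ln x + 1} = e^{t^2 + t}$ and $dx = e^t\,dt$, so $I = \int_{-\infty}^{\infty} \arctan(e^t)\, e^{-t^2}\, dt$; pairing $t$ with $-t$ and using $\arctan u + \arctan(1/u) = \pi/2$ for $u > 0$ suggests $I = \frac{\pi^{3/2}}{4}$. A quick numerical check with the stdlib:

```python
import math

# Trapezoidal check of I = ∫ arctan(e^t) e^{-t^2} dt over [-6, 6]
# (the tails beyond |t| = 6 are below e^{-36} and are ignored)
def f(t):
    return math.atan(math.exp(t)) * math.exp(-t * t)

a, b, n = -6.0, 6.0, 12000
h = (b - a) / n
total = 0.5 * (f(a) + f(b))
for i in range(1, n):
    total += f(a + i * h)
numeric = h * total
print(numeric, math.pi ** 1.5 / 4)
```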

  • Godunova Levin Cebaevskaya Inequality
    by Rean on June 14, 2021 at 1:35 pm

    Let $f$ and $g$ be integrable on $[a,b]$. We want to prove: for any non-decreasing $h$, $\int_a^b fh \ dx \le \int_a^b gh \ dx$ iff $\int_a^x g \le \int_a^x f$ and $\int_a^b g = \int_a^b f$. I put $F(x) = \int_a^x g - \int_a^x f \le 0$, so $F(b)=0$, but it is strange how multiplication by a non-decreasing $h$ causes the inequality to reverse! I would be very grateful even for a small hint; I have a mental block with this and can't make any progress at all.

  • Is black hole pattern possible in Conway’s Game of Life that eats/clears everything?
    by VainMan on June 14, 2021 at 11:49 am

    Is a black hole pattern possible in Conway’s Game of Life that eats/clears the infinite universe plane? Formally, is there a pattern that satisfies the following requirements? The pattern has finite size. The universe plane has infinite size. Each cell outside the pattern is initially alive or dead at random; that is, the plane can contain all possible patterns, including other black holes if they are possible. For any finite region surrounding the pattern, all cells inside this region will be dead after finitely many generations and never come back to life. Edit: To describe this question more intuitively: imagine a circle that keeps expanding larger and larger, leaving the region inside it empty, no matter what patterns from the outside it collides with. Originally there was another rule: if two or more black holes collide, then they merge into a bigger black hole. It was deleted before posting because I think it’s already covered by the third and fourth rules; in addition, it’s hard for me to define and describe the merging formally. Thanks to @Vepir and @PM2Ring for mentioning the related superstable configuration question. I believe this question is an extended version of the superstable configuration question, which is an open problem, because if such a black hole exists, it must be superstable or become superstable after finitely many generations. Edit: After a bit of further thinking, the mentioned superstable configuration question inspired a simple but flawed answer to this question. See it below.

  • Direct proof that if $n^2$ is odd, $n$ is odd
    by Kman3 on June 13, 2021 at 9:25 pm

    I am rereading a book on methods of proof and thought I would try proving that if $n^2$ is odd, then $n$ is odd. The proofs for this that I have seen online mostly involve a proof by contrapositive. I was wondering if this could be done by direct proof instead. I found a direct proof in Mark Bennet’s answer to this question. It goes: Suppose $n^2$ is odd, then $n^2=2m−1$ and $(n+1)^2=2(m+n)$ Now $2$ is prime and $2∣(n+1)^2$ so $2∣n+1$ therefore $n+1=2r$ (for some integer r) whence $n=2r−1$ and $n$ is odd. I came up with a separate proof and I was wondering if it is logically sound: Suppose $n^2$ is odd, then $n^2=2m+1$ for some $m \in \mathbb{Z}$. $n^2=2m+1 \implies n^2-1 = 2m \implies (n+1)(n-1) = 2m$ This shows $(n+1)(n-1)$ is even; for this to be true, at least one of $n+1$ and $n-1$ must be even, which means that $n$ is odd. Is this proof written well? Does it have gaps? If there are problems with it, I would like to ensure that I do not make those same mistakes in the future.

  • Is $\curvearrowright$ a valid symbol for “implies that”?
    by user3187119 on June 13, 2021 at 6:47 pm

    I learned in high school from my favorite math teacher (who also has a PhD in mathematics) that the $\curvearrowright$ symbol means “implies that” (in German “daraus folgt”; “from that follows”). Now that I am learning higher math elsewhere I have not found this notation anywhere; it always seems to be the $\Rightarrow$ symbol. The $\curvearrowright$ symbol has really grown on me, and it takes much less time to draw than the commonly used $\Rightarrow$ symbol. I am just curious if $\curvearrowright$ is also a commonly accepted symbol? Maybe it’s an old DDR (communist Germany) thing – as that’s where my teacher received his PhD?

  • What is the benefit of defining a positive norm for vectors?
    by Ryder Rude on June 13, 2021 at 10:12 am

    I read that the reason we have the property $\langle A|B\rangle=\langle B|A\rangle^*$ is to define a positive norm with the formula $\langle A|A\rangle$. But I do not understand how having this norm benefits us. I guess we’re doing this to make an analogy with arrows, which also have a positive norm. But this can’t be the only reason. After all, a lot of things which are true for arrows are not true for general vectors. For instance, angle values of $-2\pi$ to $2\pi$ are not carried over from arrows to general vectors. The formula for the angle between general vectors, $\cos \theta=\frac{\langle A|B\rangle}{|A||B|}$, can result in complex values of $\theta$. The commutativity of the inner product isn’t carried over from arrows to general vectors either (though this is the very reason a positive norm gets carried over). Keeping a positive norm for general vectors must be allowing us to carry over some nice properties from the world of arrows to general vectors. What are those nice things? Even if we drop this property, we can still prove the existence of an orthonormal basis, as Gram-Schmidt does not require $\langle A|B\rangle=\langle B|A\rangle ^*$. So at least that much still works out. EDIT: I just realised that, while Gram-Schmidt may not require $\langle V|V\rangle$ to be strictly positive, it does require $\langle V|V\rangle$ to be nonzero for non-zero vectors $|V\rangle$, because only then can we rescale the basis vectors by their norms to get unit vectors. EDIT: I also realised that the Cauchy-Schwarz and triangle inequalities would no longer make sense without this norm. Maybe these are useful results too.

  • A prime numbers property: $4/\pi=e^{1/2}\Big(1+e^{1/3}\big(1+e^{1/5}(\dots)\big)\Big)/p_n, \ \ n\to\infty$
    by Patrick Danzi on June 13, 2021 at 10:04 am

    It is possible to show that the limit of the following fraction is a constant: $$ \frac{e^{1/2}\Big(1+e^{1/3}\big(1+e^{1/5}(1+e^{1/7}(\dots(1+e^{1/p(n)})\dots))\big)\Big)}{p(n)} \sim c $$ How can I determine the value of the constant $c$? From the prime number theorem we have that $\pi(x) \sim \text{li}(x) := \int_0^x \frac{dt}{\ln(t)}$, where $\pi(x)$ is the prime counting function. $\pi(n)$ can be thought of as the inverse of the function $p(n)$ which gives the $n$-th prime number, so that $p(n) \sim \text{ali}(n)$, where $\text{ali}(x) = \text{li}^{-1}(x)$ is the inverse function of the logarithmic integral: $$\text{li}(\text{ali}(x))=x \ \ \Rightarrow \ \ \frac{d}{dx}\Big[\text{li}(\text{ali}(x))\Big]=1\Rightarrow \text{ali}'(x)=\ln(\text{ali}(x))$$ $$\text{ali}''(x)=\frac{d}{dx}\Big[\ln(\text{ali}(x))\Big]=\frac{\text{ali}'(x)}{\text{ali}(x)} \Rightarrow \text{ali}'(x)=\text{ali}(x)\, \text{ali}''(x)\sim p(x) \ \text{ali}''(x) $$ Solving the differential equation $y'(x) = p(x)\, y''(x)$ we obtain: $$y(x)=c_2+c_1\int_{1}^{x}\exp \Big(\int_1^{t}\frac{du}{p(u)}\Big)dt$$ Interpreting the integral as the area under a broken line, we can write for an integer $n$: $$y(n)=c_2+c_1\sum_{v=1}^n \exp\Big(\sum_{w=1}^{v}\frac{1}{p(w)}\Big)\sim p(n)$$ $$y(4)=c_2+c_1 \Big( e^{\frac{1}{2}}+e^{\frac{1}{2}+\frac{1}{3}}+e^{\frac{1}{2}+\frac{1}{3}+\frac{1}{5}}+e^{\frac{1}{2}+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}}\Big)=c_2+c_1\, e^{1/2}\Big(1+e^{1/3}\Big(1+e^{1/5}\Big(1+e^{1/7}\Big)\Big)\Big)\approx 7$$ This means that $\big(y(n)-c_2\big)/p(n) \sim c_1$. Setting $c_2 = 0$, the trend of $y(n)/p(n)$ is as follows: The orange line represents $\frac{4}{\pi}$, but there is no reason for it to be the limit; I did it for clickbait :-\
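    The ratio $y(n)/p(n)$ (with $c_2=0$, $c_1=1$) is easy to track numerically with the stdlib (my own sketch; it only tabulates the ratio, it does not settle what $c$ is):

```python
import math

def primes_up_to(limit):
    # simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(20000)  # more than the first 2000 primes

def ratio(n):
    # y(n)/p(n) = (1/p_n) * sum_{v=1}^{n} exp(sum_{w<=v} 1/p_w)
    partial, total = 0.0, 0.0
    for v in range(n):
        partial += 1.0 / primes[v]
        total += math.exp(partial)
    return total / primes[n - 1]

for n in (100, 500, 1000, 2000):
    print(n, ratio(n))  # 4/π ≈ 1.2732 for comparison
```

    By Mertens' second theorem $\sum_{w\le v} 1/p_w = \ln\ln p_v + M + o(1)$ with $M$ the Meissel-Mertens constant, which suggests comparing the ratio against $e^{M} \approx 1.2988$ as well as against $4/\pi \approx 1.2732$.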

  • In how many ways can 5 different rings be worn on 4 fingers?
    by Ayush Sambher on June 13, 2021 at 7:25 am

    Is the answer $4^5$ or $4 \times 5 \times 6 \times 7 \times 8$? The rationale behind $4 \times 5 \times 6 \times 7 \times 8$ is that the first ring has $4$ options, the second ring has $5$ options (since the second ring can also go underneath the first), and so on. I think $4^5$ is correct, since $4 \times 5 \times 6 \times 7 \times 8$ seems to overcount the number of possibilities.
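    The two counts can be compared by brute force (my own sketch): enumerate all ring-to-finger assignments; $4^5$ counts them with the stacking order on each finger ignored, while weighting each assignment by the orderings of its stacks recovers $4\times5\times6\times7\times8$, so the two answers count different things.

```python
from itertools import product
from math import factorial

fingers, rings = 4, 5

unordered = 0   # ring -> finger assignments, stacking order ignored
ordered = 0     # same assignments, times the orderings of each finger's stack
for assign in product(range(fingers), repeat=rings):
    unordered += 1
    orderings = 1
    for f in range(fingers):
        orderings *= factorial(assign.count(f))
    ordered += orderings

print(unordered, ordered)  # 1024 6720
```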

  • Showing a series is finite a.e. given an $L^p$ function.
    by E.E. on June 12, 2021 at 10:59 pm

    Q: Suppose $f \in L^2 (\mathbb{R})$. Show that $\sum_{n = 1}^\infty \frac{f(x + n)}{n^{3/4}}$ is finite for a.e. $x \in \mathbb{R}$. It seems that we have to show the sum is in $L^p (\mathbb{R})$ for some $p \geq 1$. Let’s try $p = 2$: applying the Cauchy-Schwarz inequality, we have $$\int \left|\sum_{n = 1}^\infty \frac{f(x + n)}{n^{3/4}}\right|^2 dx \leq \left(\sum_{n = 1}^\infty \frac{1}{n^{3/2}}\right) \int \left(\sum_{n = 1}^\infty (f(x + n))^2\right) dx = C \sum_{n = 1}^\infty \|f\|_2^2 = \infty .$$ It seems that this estimate is too large. For $p = 1$, a similar situation occurs when the CS inequality is applied. I am thinking that if we can show $$\int \left|\sum_{n = 1}^\infty \frac{f(x + n)}{n^{3/4}}\right| dx \leq \sum_{n = 1}^\infty \int \frac{[f(x + n)]^2}{n^{3/2}} dx,$$ then we are done. However, I have no idea how to get this (or a similar) estimate.

  • Computing the limit $\lim_{k \to \infty} \int_0^k x^n \left(1 – \frac{x}{k} \right)^k \mathrm{d} x$ for fixed $n \in \mathbb{N}$
    by AJY on June 12, 2021 at 9:15 pm

    I’m working on a problem that asks to compute $$\lim_{k \to \infty} \int_0^k x^n \left(1 – \frac{x}{k} \right)^k \mathrm{d} x$$ for fixed $n \in \mathbb{N}$. What I’ve tried so far is to do a $u$-substitution for $u = \frac{x}{k}$, so I have $$\int_0^k x^n \left(1 – \frac{x}{k} \right)^k \mathrm{d} x = k^{n + 1} \int_0^1 u^n (1 – u)^k \mathrm{d} u .$$ Using the Binomial Theorem to break up the $(1 – u)^k$ term and integrating, I get $$k^{n + 1} \int_0^1 u^n (1 – u)^k \mathrm{d} u = k^{n + 1} \sum_{j = 0}^k \binom{k}{j} \frac{(-1)^j}{n + j + 1} .$$ However, I don’t know how to compute the limit of this expression as $k \to \infty$. I assume that I should recognize it as some kind of Taylor series that’s somehow $O \left( k^{-(n + 1)} \right)$, but I’m not seeing it. Note: When looking at other posts, I found an integral that looked similar to this one, and the only answer on that post involved something called a beta function. I have never heard of a beta function, and would like to find a solution here that doesn’t rely on whatever a beta function is. Another idea I considered was to use the Dominated Convergence Theorem, since $e^{-x} = \lim_{k \to \infty} \left( 1 – \frac{x}{k} \right)^k$, so I figured I could use DCT to say that \begin{align*} \lim_{k \to \infty} \int_0^k x^n \left( 1 – \frac{x}{k} \right)^k \mathrm{d} x & = \lim_{k \to \infty} \int_0^\infty \chi_{[0, k]}(x) x^n \left( 1 – \frac{x}{k} \right)^k \mathrm{d} x \\ & = \int_0^\infty x^n e^{-x} \mathrm{d} x & (\textrm{DCT used here})\\ & = n ! , \end{align*} assuming I didn’t mess up any of my integration by parts. However, I couldn’t find a choice of dominating function that would work on all of $[0, \infty)$, so I’d also be interested in a solution that uses DCT as well. Perhaps I’m being naive, but the pointwise limit is just so convenient that I have to imagine that DCT can be used here.
    EDIT: Thanks to some inspiration from a comment by user Mars Plastic, I thought to consider some other integral convergence theorems. I came up with this, which I think works; I’m still interested to see if the Dominated Convergence Theorem argument can be made to work, perhaps a bit more smoothly than this. Let $f_k(x) = \chi_{[0, k]}(x) x^n \left( 1 – \frac{x}{k} \right)^k$. I claim that the sequence is monotone increasing on $[0, \infty)$. Fix $x \in (0, \infty)$, and let $K = \lfloor x \rfloor + 1$, so that $K = \min \{ k \in \mathbb{N} : f_k(x) \neq 0 \}$. Obviously if $k < K$, then $f_k(x) = 0 \leq f_{k + 1}(x)$. So consider the case where $k \geq K$. Then $f_k(x) = x^n \left(1 – \frac{x}{k} \right)^k$, and based on answers to this question, this sequence is monotone increasing in $k$. Therefore, I can apply the Monotone Convergence Theorem: $f_k(x) \nearrow x^n e^{-x}$, so $$\lim_{k \to \infty} \int_0^k x^n \left( 1 – \frac{x}{k} \right)^k \mathrm{d} x = \lim_{k \to \infty} \int_0^\infty f_k(x) \, \mathrm{d} x = \int_0^\infty x^n e^{-x} \, \mathrm{d} x = n! .$$
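    As a numerical sanity check (my own sketch): the inner integral has the elementary closed form $\int_0^1 u^n (1-u)^k\,du = \frac{n!\,k!}{(n+k+1)!}$, provable by repeated integration by parts with no special-function machinery, so the whole integral can be evaluated exactly and watched as it converges to $n!$:

```python
from math import factorial

def integral(n, k):
    # ∫_0^k x^n (1 - x/k)^k dx = k^{n+1} ∫_0^1 u^n (1-u)^k du
    #                          = k^{n+1} · n! k! / (n+k+1)!
    # computed as n! · ∏_{j=1}^{n+1} k/(k+j) to avoid huge factorials
    value = float(factorial(n))
    for j in range(1, n + 2):
        value *= k / (k + j)
    return value

for k in (10, 100, 10000):
    print(k, integral(3, k))  # approaches 3! = 6
```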

  • For a normed vector space, is $\|x-y\| \leq \|x\|+\|y\|$ true?
    by Mario on June 12, 2021 at 8:41 pm

    I have a question about an inequality in normed vector spaces and I want to know if my proof is correct. Claim: Let $X$ be a normed vector space. Then \begin{equation} \|x-y\| \leq \|x\|+\|y\|\end{equation} for all $x,y \in X$. Proof: Using the triangle inequality and the fact that $\|z\|=\|-z\|$, we have \begin{equation} \begin{split} \|x-y\|&= \|x+(-y)\|\\ &\leq \|x\|+\|-y\| \\ &= \|x\|+\|y\| \end{split} \end{equation} Thanks for your answers!

  • On the linearity of metric projections
    by Evangelopoulos F. on June 12, 2021 at 5:19 pm

    Let $X$ be a reflexive and strictly convex Banach space. If $V$ is a closed subspace of $X$, then for each $x \in X$ there exists a unique vector $P_V(x) \in V$ that solves the minimization problem $\inf_{v \in V} \|x-v\|$. The map $P_V \colon X \to X, ~ x \mapsto P_V(x)$ is called the metric projection onto $V$. In general, such a map need not be linear. However, in the case of Hilbert spaces, the metric projection coincides with the orthogonal projection onto $V$, which is linear. I was wondering if Hilbert spaces are the only ones with the property that each metric projection onto a closed subspace is linear. To be more exact, is the following statement true? Let $X$ be a reflexive and strictly convex Banach space. Suppose that each of its metric projections onto closed subspaces is linear. Then $X$ is a Hilbert space. Any help is appreciated!

  • Outer automorphisms of extraspecial groups in GAP
    by unknown on June 11, 2021 at 3:36 pm

    Let $G_n$ be the extraspecial group of order $2^{1+2n}$. Its outer automorphism group is known to be isomorphic to the general orthogonal group $GO(2n)$. I’d like to get an explicit map of this isomorphism in GAP. Here’s what I tried so far:

        n:=4;
        grp:=ExtraspecialGroup(2^(2*n+1),"+");
        aut:=AutomorphismGroup(grp);
        inn:=InnerAutomorphismsAutomorphismGroup(aut);
        ort:=GeneralOrthogonalGroup(+1,2*n,2);
        Print("|out| = ",Size(aut)/Size(inn),"\n");
        Print("|ort| = ",Size(ort),"\n");
        gen_aut:=GeneratorsOfGroup(aut);
        gen_ort:=GeneratorsOfGroup(ort);
        Print("generators of aut = ",Length(gen_aut),"\n");
        Print("generators of ort = ",Length(gen_ort),"\n");

    This prints:

        |out| = 348364800
        |ort| = 348364800
        generators of aut = 10
        generators of ort = 3

    The sizes of the two groups (out and ort) match as expected, but I’m not sure how to proceed from here. How would you define the outer automorphism group in GAP, and how would you align its generators with those of the orthogonal group to find the isomorphism?

  • Find the angle or prove it is constant
    by Chen Aavaz on June 11, 2021 at 12:25 pm

    We are given a regular pentagon $ABCDE$ (with the letters in clockwise sequence). A point $Q$ moves on side $BC$ and we draw a circle with center $Q$ which passes through the vertex $A$ and intersects the diagonal $BE$ in a point $R$. Prove that the angle $AQR$ is constant. If $s$ is the side of the regular pentagon, then the length of its diagonal is $d = s\cdot\frac{\sqrt5+1}{2}$. Then the length of segment $RE$ is $d - s = s\cdot\frac{\sqrt5-1}{2}$. Also, if $Q$ coincides with $B$, then the triangle $ABR$ is a “golden” triangle and its base side $t$ is $\frac{s}{\varphi}$, where $\varphi$ is the golden ratio. So $t = RE$ and the angle $ARE = 180^\circ-72^\circ = 108^\circ$. So $AR$ and $RE$ are two sides of a regular pentagon, and the angle $AQR = 36^\circ$. Edit: I added the diagram, as you asked. The pink polygon is not part of the problem; I just sketched it in my attempt to prove the question. I also tried to prove that the quadrilateral $ABQR$ is cyclic, in which case the angle $\alpha$ would be equal to $ABE$, which we know is $36^\circ$. Is this a sufficient proof?

  • Complex integration to solve real integral $\int_{0}^{2}\frac{\sqrt{x\left(2-x\right)}}{\left(x+1\right)^{2}\left(3-x\right)}dx.$
    by MathLearner on June 11, 2021 at 8:55 am

    I have to solve this integral $$\int_{0}^{2}\frac{\sqrt{x\left(2-x\right)}}{\left(x+1\right)^{2}\left(3-x\right)}dx.$$ Thus, I formed the following contour: By Cauchy's theorem I have $$I=\int_{C_{R}}f\left(z\right)dz+\int_{C_{r1}}f\left(z\right)dz+\int_{k_{\epsilon}}f\left(z\right)dz+\int_{z_{1}}f\left(z\right)dz+\int_{k_{\rho}}f\left(z\right)dz+\int_{C_{r2}}f\left(z\right)dz+\int_{z_{2}}f\left(z\right)dz=0$$ Now I use Jordan’s lemma to calculate the values over the small circles. I have $$\lim_{z\rightarrow0}z\cdot\frac{\sqrt{z\left(2-z\right)}}{\left(z+1\right)^{2}\left(3-z\right)}=0\,\,\Rightarrow\,\,\lim_{r\rightarrow0}\int_{k_{\rho}}f\left(z\right)dz=0$$ $$\lim_{z\rightarrow3}\left(z-3\right)\cdot\frac{\sqrt{z\left(2-z\right)}}{\left(z+1\right)^{2}\left(3-z\right)}=0 \Rightarrow\,\,\lim_{r\rightarrow0}\int_{C_{r1}}f\left(z\right)dz=0$$ Now I have a problem with the factor $(z+1)^2$: I cannot apply Jordan’s lemma because the limit $$\lim_{z\rightarrow-1}\left(z+1\right)\cdot\frac{\sqrt{z\left(2-z\right)}}{\left(z+1\right)^{2}\left(3-z\right)}$$ does not exist (the pole at $z=-1$ is of second order). Does anyone have an idea how to solve this? I think the lemma cannot be applied here, but I don’t know another way. I also found that the function is regular at infinity, thus the integral over $C_{R}$ is zero.

  • Find $\int _0^1\frac{12\arctan ^2 x\ln (\frac{(1-x)^2}{1+x^2})-\ln ^3(\frac{(1-x)^2}{1+x^2})}{x}\:dx$
    by mattsteiner64 on June 11, 2021 at 3:57 am

    I want to find and prove that: $$\int _0^1\frac{12\arctan ^2\left(x\right)\ln \left(\frac{\left(1-x\right)^2}{1+x^2}\right)-\ln ^3\left(\frac{\left(1-x\right)^2}{1+x^2}\right)}{x}\:dx=\frac{9 \pi ^4}{16}$$ I tried to split the integral: $$=12\int _0^1\frac{\arctan ^2\left(x\right)\ln \left(\frac{\left(1-x\right)^2}{1+x^2}\right)}{x}\:dx+12\int _0^1\frac{\ln ^2\left(1-x\right)\ln \left(1+x^2\right)}{x}\:dx$$ $$-6\int _0^1\frac{\ln \left(1-x\right)\ln ^2\left(1+x^2\right)}{x}\:dx+\int _0^1\frac{\ln ^3\left(1+x^2\right)}{x}\:dx-8\int _0^1\frac{\ln ^3\left(1-x\right)}{x}\:dx$$ But the first $3$ integrals are very tough; expanding some terms yields very complicated series. I also attempted to integrate by parts, but it was rather messy and I’d prefer not to put it here. I have faith that there must be a trick for finding this one because of the simplicity of its closed form. Note: I found this integral in a certain mathematics group; I’ve asked for hints but received none.

  • What mapping does Escher use?
    by rihartley on June 11, 2021 at 3:55 am

    In Escher’s hyperbolic tessellations, he takes (effectively) a tessellation of the plane and maps it to a tessellation of the unit disk, by a mapping that takes straight lines to circles meeting the disk boundary at right angles. What precisely is this mapping (with a formula)? I am aware that this is effectively a mapping from the hyperbolic plane to the Poincaré disk, taking a tessellation of the hyperbolic plane to one of the disk. But what is it as a map from $\mathbb{R}^2$ to $D$, with a formula? I believe that it is a conformal map. What is the correct formula in higher dimensions, also?

  • Understanding a basic permutation problem
    by Rimov on June 10, 2021 at 6:11 pm

    I am failing to wrap my head around the solution to a rather basic problem: At a party, $n$ men and $m$ women put their drinks on a table and go out on the floor to dance. When they return, none of them recognizes his or her drink, so everyone takes a drink at random. What is the probability that each man selects his own drink? The solution is: $$\frac{m!}{(n+m)!}$$ From what I intuit, $n!$ is the total number of permutations of the drinks among the men, and the above solution is the probability that all men get another man’s drink, not necessarily that each one gets his own. My solution was: there is a $\frac{1}{n+m}$ probability of the first man getting his drink, then $\frac{1}{n+m-1}$ for the second, $\frac{1}{n+m-2}$ for the third, … , $\frac{1}{m+1}$ for the $n^{th}$, resulting in a probability of $\prod\limits_{i=0}^{n-1} \frac{1}{n+m-i}$. Can someone explain their thinking when approaching this problem? EDIT: The issue arose from confusing variables and failing to identify that $$ \begin{align} \frac{m!}{(n+m)!} &= \frac{m!}{(n+m)(n+m-1)\cdots(n+m-(n-1))\,(m!)}\\ &= \prod\limits_{i=0}^{n-1} \frac{1}{n+m-i}\\ \end{align} $$
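    A quick Monte Carlo check of $\frac{m!}{(n+m)!}$ (my own sketch; small $n, m$ keep the event from being too rare to estimate):

```python
import random
from math import factorial

def estimate(n, m, trials=200_000, seed=0):
    # Men's own drinks occupy slots 0..n-1; everyone grabs a drink at random,
    # i.e. the drinks undergo a uniform random permutation each trial.
    rng = random.Random(seed)
    drinks = list(range(n + m))
    hits = 0
    for _ in range(trials):
        rng.shuffle(drinks)
        # man i ends up holding drinks[i]; success means every man i holds drink i
        if all(drinks[i] == i for i in range(n)):
            hits += 1
    return hits / trials

n, m = 3, 2
exact = factorial(m) / factorial(n + m)   # 2/120 = 1/60
print(estimate(n, m), exact)
```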