
Ramanujan's radical and how we define an infinite nested radical


I know it is true that we have

$$\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}=3$$

The argument is to break the nested radical into something like

$$3 = \sqrt{9}=\sqrt{1+2\sqrt{16}}=\sqrt{1+2\sqrt{1+3\sqrt{25}}}=\cdots=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$

However, I am not convinced. I can do something like

$$4 = \sqrt{16}=\sqrt{1+2\sqrt{56.25}}=\sqrt{1+2\sqrt{1+3\sqrt{\frac{48841}{144}}}}=\cdots=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$

Something must be wrong, and the reason is presumably a misunderstanding of how we define an infinite nested radical of the form
$$ \sqrt{a_{0}+a_{1}\sqrt{a_{2}+a_{3}\sqrt{a_{4}+a_{5}\sqrt{a_{6}+\cdots}}}} $$
I researched for a while, but all I could find were computational tricks, not a strict definition. I would really appreciate help here. Thanks.
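To see numerically what distinguishes $3$, here is a quick Python sketch (my own illustration, not a definition): truncate the radical at depth $n$, plug an arbitrary number into the innermost radicand, and evaluate from the inside out. The truncations approach $3$ regardless of that innermost value; the expansion of $4$ above only survives because its innermost leftovers ($56.25$, $48841/144$, ...) grow without bound.

```python
from math import sqrt

def truncated(n, seed=1.0):
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(... + n*sqrt(seed)))) inside out."""
    value = sqrt(seed)
    for k in range(n, 1, -1):   # coefficients n, n-1, ..., 2
        value = sqrt(1 + k * value)
    return value

for seed in (1.0, 100.0, 56.25):
    print(seed, truncated(40, seed))   # all very close to 3
```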










  • Surely the rigorous definition is that it is the limit of the finite expressions that you get when you drop the $+ \cdots$, provided that this limit exists. – John Coleman, yesterday












  • So you are suggesting $\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$ actually has no universally agreed definition in maths? – Anson NG, yesterday






  • When you write something down, you stop your writing at some stage and break into $+ \cdots$. What you have written is a purely finite expression. Think of these finite expressions as terms in a sequence. What do these terms approach? – John Coleman, yesterday






  • I understand your comment. Given the general term of a partial sum, we can find its limit, although it might not exist. We can do this because we all universally agree on what a general term means. Sometimes the dots are clear in the sense that we would not argue about their meaning. For example, consider $1+2+3+4+5+\cdots$. Someone could still argue that the 6th term can be any number and does not have to be 6, but it's generally agreed that we look at the pattern and take the 6th term as 6. – Anson NG, yesterday








  • This is interesting, since you can redefine any number this way, as you pointed out. We need to think of the infinite nested root with a more careful definition. Maybe it's only equal to 3 if we don't have to use fractions to expand out more nested roots. That is, in Ramanujan's radical, when you expand the nested roots, they are all in terms of integers, whereas for other numbers you will get fraction-like terms, similar to what you have... just a thought! – user209663, yesterday


















real-analysis sequences-and-series elementary-number-theory convergence nested-radicals






edited yesterday by Eevee Trainer

asked yesterday by Anson NG








5 Answers



















Introduction:



The issue is what "..." really "represents."



Typically we use it as a sort of shorthand, as if to say "look, I can't write infinitely many things down, just assume that the obvious pattern holds and goes on infinitely."



This idea holds for all sorts of things - nested radicals, infinite sums, continued fractions, infinite sequences, etc.





On Infinite Sums:



A simple example: the sum of the reciprocals of squares:



$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots$$



This is a well-known summation: it is the Riemann zeta function $\zeta(s)$ at $s=2$, and is known to evaluate to $\pi^2/6$ (proved by Euler and known as the Basel problem).



Another, easier-to-handle summation is the geometric sum



$$1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + \cdots$$



This is a geometric series where the ratio is $1/2$ - each summand is half the previous one. We know, too, that this evaluates to $2$.



Another geometric series you might see in proofs that $0.999... = 1$ is



$$9 \left( \frac{1}{10} + \frac{1}{100} + \frac{1}{1{,}000} + \frac{1}{10{,}000} + \cdots \right)$$



which equals $1$. In fact, any infinite geometric series with first term $a$ and ratio $|r|<1$ can be evaluated by



$$\sum_{n=0}^\infty ar^n = \frac{a}{1-r}$$



So a question arises - ignoring these "obvious" results (depending on your amount of mathematical knowledge), how would we know these converge to the given values? What, exactly, does it mean for a summation to converge to a number or equal a number? For finite sums this is no issue - if nothing else, we could add up each number manually, but we can't just add up every number from a set of infinitely-many numbers.



Well, one could argue by common sense that, if the sequence converges to some number, the more and more terms you add up, the closer they'll get to that number.



So we obtain one definition for the convergence of an infinite sum. Consider a sequence where the $n^{th}$ term is defined by the sum of the first $n$ terms in the sequence. To introduce some symbols, suppose we're trying to find the sum



$$\sum_{k=1}^\infty x_k = x_1 + x_2 + x_3 + x_4 + \cdots$$



for whatever these $x_i$'s are. Then define these so-called "partial sums" of this by a function $S(n)$:



$$S(n) = \sum_{k=1}^n x_k = x_1 + x_2 + \cdots + x_n$$



Then we get a sequence of sums:



$$S(1), S(2), S(3), \ldots$$



or equivalently



$$x_1 \;\;,\;\; x_1 + x_2\;\;,\;\; x_1 + x_2 + x_3\;\;,\;\; \ldots$$



Then we ask: what does $S(n)$ approach as $n$ grows without bound, if anything at all? (In calculus, we call this "the limit of the partial sums $S(n)$ as $n$ approaches infinity.")



For the case of our first geometric sum, we immediately see the sequence of partial sums



$$1, \frac{3}{2}, \frac{7}{4}, \frac{15}{8},\ldots$$



Clearly, this suggests a pattern - and if you want to, you can go ahead and prove it; I won't do so here for brevity's sake. The pattern is that the $n^{th}$ term of the sequence is



$$S(n) = \frac{2^{n+1}-1}{2^{n}}$$



We can then easily consider the limit of these partial sums:



$$\lim_{n\to\infty} S(n) = \lim_{n\to\infty} \frac{2^{n+1}-1}{2^{n}} = \lim_{n\to\infty} \left( \frac{2^{n+1}}{2^{n}} - \frac{1}{2^{n}} \right) = \lim_{n\to\infty} 2 - \lim_{n\to\infty} \frac{1}{2^{n}}$$



Obviously, $1/2^{n} \to 0$ as $n$ grows without bound, and $2$ is not affected by $n$, so we conclude



$$\lim_{n\to\infty} S(n) = \lim_{n\to\infty} 2 - \lim_{n\to\infty} \frac{1}{2^n} = 2 - 0 = 2$$



And thus we say



$$\sum_{k=0}^\infty \left(\frac 1 2 \right)^k = 1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + \cdots = 2$$



because the partial sums approach $2$.
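This limiting process is easy to watch numerically; here is a small Python sketch (an illustration of the argument above, not part of the original answer) accumulating the partial sums $S(n)$:

```python
partial, term = 0.0, 1.0
sums = []
for n in range(1, 31):
    partial += term          # after this line, partial == S(n)
    term /= 2
    sums.append(partial)

print(sums[:4])              # [1.0, 1.5, 1.75, 1.875]
print(sums[-1])              # S(30), within 2**-29 of the limit 2
```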





On Continued Fractions:



That was a simple, "first" sort of example, but mathematicians essentially do the same thing in other contexts. I want to touch on one more such context before we deal with the radical case, just to nail that point home.



In this case, it will be with continued fractions. One of the simpler such fractions is the one for $1$:



$$1 = \frac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{\ddots}}}}$$



As usual, the "..." denotes that this continues forever. But what does it mean for this infinite expression to equal $1$?



For this, we consider a more general analogue of the "partial sum" from before - a "convergent." We cut up the sequence at logical finite points, where those points are depends on the context. Then if the sequence of the convergents approaches a limit, we say they're equal.



What are the convergents for a continued fraction? By convention, we cut off just before the start of the next fraction. That is, in the continued fraction for $1$, we cut off at the $n^{th}\;2$ for the $n^{th}$ convergent, and ignore what follows. So we get the sequence of convergents



$$\frac{1}{2} , \frac{1}{2-\frac{1}{2}}, \frac{1}{2-\frac{1}{2-\frac{1}{2}}},\ldots$$



Working out the numbers, we find the sequence to be



$$\frac{1}{2},\frac{2}{3},\frac{3}{4},\ldots$$



Again, we see a pattern! The $n^{th}$ term of the sequence is clearly of the form



$$\frac{n}{n+1}$$



Let $C(n)$ be a function denoting the $n^{th}$ convergent. Then $C(1)=1/2,$ $C(2) = 2/3,$ $C(n)=n/(n+1),$ and so on. So like before we consider the infinite limit:



$$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} \frac{n}{n+1} = \lim_{n\to\infty} \left( 1 - \frac{1}{n+1} \right) = 1 - 0 = 1$$



Thus we can conclude that the continued fraction equals $1$, because its sequence of convergents approaches $1$!
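The same computation can be done in code. A Python sketch (my own, using exact rationals so the convergents $1/2, 2/3, 3/4, \ldots$ appear directly):

```python
from fractions import Fraction

def convergent(n):
    """Cut the continued fraction at the n-th 2 and evaluate it exactly."""
    value = Fraction(1, 2)          # innermost piece: 1/2
    for _ in range(n - 1):
        value = Fraction(1, 2 - value)
    return value

print([convergent(n) for n in range(1, 5)])   # [1/2, 2/3, 3/4, 4/5]
print(float(convergent(1000)))                # approaching the limit 1
```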





On Infinite Radicals:



So now, we touch on infinite nested radicals. They're messier to deal with but doable.



One of the simpler examples of such radicals to contend with is



$$2 = \sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\cdots}}}}}$$



As with the previous two cases, we see an infinite expression. By now we instinctively know what to do: to logically define a limit for this expression - to assign it a value, provided it even exists - we need to chop it up at finite points, define a sequence of convergents $C(n)$, and then find the limit of $C(n)$ as $n\to\infty$.



Nested radicals are a lot messier than the previous, but we manage.



So first let the sequence of convergents be given by cutting off everything after the $n^{th}\;2$ in the expression. Thus we get the sequence



$$\sqrt 2 \;\;,\;\; \sqrt{2 + \sqrt{2}}\;\;,\;\; \sqrt{2+\sqrt{2+\sqrt{2}}}\;\;,\;\; \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}\;\;,\;\;\ldots$$



This isn't particularly nice already, but there does exist, shockingly enough, a closed-form explicit expression for $C(n)$ (from S. Zimmerman, C. Ho):



$$C(n) = 2\cos\left(\frac{\pi}{2^{n+1}}\right)$$



(I had to find that expression by Googling, I honestly didn't know that offhand. It can be proved by induction, as touched on in this MSE question.)



So luckily, then, we can find the limit of $C(n)$:



$$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} 2\cos\left(\frac{\pi}{2^{n+1}}\right)$$



It is probably obvious that the argument of the cosine function approaches $0$ as $n$ grows without bound, and thus



$$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} 2\cos\left(\frac{\pi}{2^{n+1}}\right) = 2\cos(0) = 2\cdot 1 = 2$$



Thus, since its convergents approach $2$, we can conclude that



$$2 = \sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\cdots}}}}}$$
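A quick numerical cross-check of the closed form (a Python sketch of the computation above, evaluating each convergent from the inside out):

```python
from math import sqrt, cos, pi

def C(n):
    """Convergent with n twos: sqrt(2 + sqrt(2 + ... + sqrt(2))), inside out."""
    value = 0.0
    for _ in range(n):
        value = sqrt(2 + value)
    return value

for n in (1, 3, 10):
    print(n, C(n), 2 * cos(pi / 2 ** (n + 1)))   # the two columns agree
print(C(30))                                     # essentially 2
```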





A Lengthy Conclusion:



So, in short, how do we evaluate an infinite expression, be it radical, continued fraction, sum, or otherwise?



We begin by truncating the expression at convenient finite places, creating a sequence of convergents - generalizations of the "partial sums" introduced in calculus. We then try to get a closed form or some other usable expression for the convergents $C(n)$, and consider the value as $n\to\infty$. If it converges to some value, we say that the expression is in fact equal to that value. If it doesn't, then the expression doesn't converge to any value.



This doesn't mean each expression is "nice." Radical expressions in particular, in my experience, tend to be nasty as all hell, and I'm lucky I found that one closed form expression for the particularly easy radical I chose.



This doesn't mean that other methods cannot be used to find the values, so long as there's some sort of logical justification for said method. For example, there is a justification for the formula for an infinite (and finite) geometric sum. We might have to circumvent the notion of partial sums entirely, or at least it might be convenient to do so. For example, with the Basel problem, Euler's proof focused on Maclaurin series, and none of this "convergent" stuff. (That proof is here plus other proofs of it!)



Luckily, at least, this notion of convergents, even if it may not always be the best way to do it, lends itself to an easy way to check a solution to any such problem. Just find a bunch of the convergents - take as many as you need. If you somehow have multiple solutions, as you think with Ramanujan's radical, then you'll see the convergents get closer and closer to the "true" solution.



(How many convergents you need to find depends on the situation and how close your proposed solutions are. It might be immediately obvious after $10$ iterations, or might not be until $10,000,000$. This logic also relies on the assumption that there is only one solution to a given expression that is valid. Depending on the context, you might see cases where multiple solutions are valid but this "approaching by hand" method will only get you some of the solutions. This touches on the notion of "unstable" and "stable" solutions to dynamical systems - which I believe is the main context where such would pop up - but it's a bit overkill to discuss that for this post.)



So I will conclude by showing, in this way, that the solution is $3$ to Ramanujan's radical.



We begin with the radical itself:



$$\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}=3$$



Let us begin by getting a series of convergents:



$$\sqrt{1} \;\;,\;\; \sqrt{1 + 2\sqrt{1}} \;\;,\;\; \sqrt{1 + 2\sqrt{1 + 3\sqrt{1}}} \;\;,\;\;\ldots$$



Because the $\sqrt{1}$ isn't necessary, we just let it be $1$:



$$1 \;\;,\;\; \sqrt{1 + 2} \;\;,\;\; \sqrt{1 + 2\sqrt{1 + 3}} \;\;,\;\;\ldots$$



Okay so ... where to go from here? Honestly, my initial temptation was to just use a MATLAB script and evaluate it, but I can't think of even a recursive closed form for this that would be nice enough. So in any event, we just have to go by "hand" (and by hand I mean WolframAlpha). Let $C(n)$ be the $n^{th}$ convergent. Then




  • $C(1) = 1$

  • $C(2) \approx 1.732$

  • $C(3) \approx 2.236$

  • $C(4) \approx 2.560$

  • $C(5) \approx 2.755$

  • $C(6) \approx 2.867$

  • $C(7) \approx 2.929$

  • $C(8) \approx 2.963$

  • $C(9) \approx 2.981$

  • $C(10) \approx 2.990$


Skipping ahead a few values, since at this point the changes get minimal: I used a macro to generate the expression for $C(50)$, put it into Microsoft Excel, and got the approximate result



$$C(50) \approx 2.999\;999\;999\;999\;99$$



So while this is not an ironclad proof of convergence, at least on an intuitive level we can see that the convergents of Ramanujan's radical approach $3$ - not $4$ or any other number - and feel confident that



$$3 = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$
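For anyone who wants to reproduce the table without Excel, here is a short Python version of the same check (my own sketch, not the macro used above); no closed form is needed, since each convergent can be evaluated right to left:

```python
from math import sqrt

def ramanujan_convergent(n):
    """C(n): cut off after the n-th coefficient, with innermost radicand 1."""
    value = 1.0                    # the innermost sqrt(1)
    for k in range(n, 1, -1):      # coefficients n, n-1, ..., 2
        value = sqrt(1 + k * value)
    return value

for n in (2, 10, 50):
    print(n, ramanujan_convergent(n))   # ~1.732, ~2.990, ~3.0
```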





Whew! Hopefully that lengthy post was helpful to you!





A late footnote, but Mathologer on YouTube did a video on this very topic, so his video would give a decent summary of all this stuff as well. Here's a link.






  • On Mathematica you can do f[x_, n_] := Sqrt[1 + n * x]; ram[x0_, n_] := Fold[f, Sqrt[x0], Reverse @ Range[2, n]]; The results suggest that the radical approaches 3 regardless of the innermost initial value. – lastresort, yesterday




















The user @Eevee Trainer provided a nice explanation of how we define an infinite nested radical as the limit of finite nested radicals, which should be insensitive to the starting point. For full generality in this regard, we can consider the convergence of the following finite nested radical



$$ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{ 1 + \cdots n \sqrt{1 + (n+1) a_n}}}} \tag{*} $$



for a given sequence $(a_n)$ of non-negative real numbers. In this answer, I will prove the convergence of $\text{(*)}$ to $3$ under a mild condition. My solution involves some preliminary knowledge of analysis.





Setting. We consider the map $\Phi$, defined on the space of all functions from $[0,\infty)$ to $[0, \infty)$, which is given by



$$ \Phi[f](x) = \sqrt{1 + xf(x+1)}. $$



Let us check how $\Phi$ is related to our problem. The idea is to apply the trick of computing Ramanujan's infinite nested radical not to a single number, but rather to a function. Here, we choose $F(x) = x+1$. Since



$$ F(x) = 1+x = \sqrt{1+x(x+2)} = \sqrt{1 + xF(x+1)} = \Phi[F](x), $$



it follows that we can iterate $\Phi$ several times to obtain



$$ F(x) = \Phi^{\circ n}[F](x) = \sqrt{1 + x\sqrt{1 + (x+1)\sqrt{1 + \cdots (x+n-1)\sqrt{(x+n+1)^2}}}}, $$



where $\Phi^{\circ n} = \Phi \circ \cdots \circ \Phi$ is the $n$-fold function composition of $\Phi$. Of course, the original radical corresponds to the case $x = 2$. This already proves that $\Phi^{\circ n}[F](x)$ converges to $F(x)$ as $n\to\infty$.



On the other hand, infinite nested radicals do not have any designated value to start with, and so the above computation is far from satisfactory when it comes to defining an infinite nested radical. Thus, a form of robustness of the convergence is required. In this regard, @Eevee Trainer investigated the convergence of



$$\Phi^{\circ n}[1](2) = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \cdots (n+1)\sqrt{1}}}}, $$



and confirmed numerically that this still tends to $F(2) = 3$ as $n$ grows. Of course, it would be ideal if we could verify the same conclusion for other choices of starting point, using a rigorous argument.
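This iteration is easy to carry out numerically. A Python sketch (mine, for illustration) iterating $\Phi$ on the constant function $1$ and evaluating at $x = 2$:

```python
from math import sqrt

def phi(f):
    """The map Phi: Phi[f](x) = sqrt(1 + x * f(x + 1))."""
    return lambda x: sqrt(1.0 + x * f(x + 1.0))

f = lambda x: 1.0        # start from the constant function 1
for _ in range(40):      # build Phi^{40}[1]
    f = phi(f)
print(f(2.0))            # very close to F(2) = 3
```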



Proof. One nice thing about $\Phi$ is that it is monotone, i.e., if $f \leq g$, then $\Phi[f] \leq \Phi[g]$. From this, it is easy to establish the following observation.



Lemma 1. For any $f \geq 0$, we have $\liminf_{n\to\infty} \Phi^{\circ n}[f](x) \geq x+1$.



Proof. We inductively apply the monotonicity of $\Phi$ to find that



$$\Phi^{\circ (n+1)}[0](x) \geq x^{1-2^{-n}} \qquad \text{and} \qquad \Phi^{\circ n}[x](x) \geq x + 1 - 2^{-n}. $$



So, for any integer $m \geq 0$,



\begin{align*}
\liminf_{n\to\infty} \Phi^{\circ n}[f](x)
&\geq \liminf_{n\to\infty} \Phi^{\circ m}[\Phi^{\circ (n+1)}[0]](x)
\geq \liminf_{n\to\infty} \Phi^{\circ m}[x^{1-2^{-(n-1)}}](x) \\
&= \Phi^{\circ m}[x](x)
\geq x + 1 - 2^{-m},
\end{align*}



and letting $m \to \infty$ proves Lemma 1 as required.



Lemma 2. If $\limsup_{x\to\infty} \frac{\log \log \max\{e, f(x)\}}{x} < \log 2$, then $\limsup_{n\to\infty} \Phi^{\circ n}[f](x) \leq x+1$.




Proof. By the assumption, there exists $C > 1$ and $alpha in [0, 2)$ such that $f(x) leq C e^{alpha^x} (x + 1)$. Again, applying monotonicity of $Phi$, we check that $Phi^{circ n}[f](x) leq C^{2^{-n}} e^{(alpha/2)^n alpha^x} (x+1)$ holds. This is certainly true if $n = 0$. Moreover, assuming that this is true for $n$,



\begin{align*}
\Phi^{\circ (n+1)}[f](x)
&\leq \Phi\left[C^{2^{-n}} e^{(\alpha/2)^n \alpha^x} (x+1)\right](x) \\
&= \left( 1 + x C^{2^{-n}} e^{(\alpha/2)^n \alpha^{x+1}} (x+2) \right)^{1/2} \\
&\leq C^{2^{-n-1}} e^{(\alpha/2)^{n+1} \alpha^{x}} (x+1).
\end{align*}



Now letting $n \to \infty$ proves the desired result.




Corollary. If $a_n \geq 0$ satisfies $\limsup_{n\to\infty} \frac{\log\log \max\{e, a_n\}}{n} < \log 2$, then for any $x \geq 0$,



$$ \lim_{n\to\infty} \sqrt{1 + x \sqrt{1 + (x+1) \sqrt{ 1 + \cdots + (x+n-2) \sqrt{1 + (x+n-1) a_n }}}} = x+1. $$




Proof. Apply Lemmas 1 and 2 to a function $f$ which interpolates $(a_n)$, for instance the piecewise linear interpolation.





The bound in Lemma 2 and the Corollary is optimal. Indeed, consider the non-example in the OP's question of expanding $4$ as in Ramanujan's infinite nested radical. So we define the sequence $a_n$ so as to satisfy



$$ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{ 1 + \cdots n \sqrt{1 + (n+1) a_n}}}} = 4. $$



Its first 4 terms are given as follows.



$$ a_1 = \frac{15}{2}, \quad
a_2 = \frac{221}{12}, \quad
a_3 = \frac{48697}{576}, \quad
a_4 = \frac{2371066033}{1658880}, \quad \cdots. $$



Then we can show that $\frac{1}{n}\log \log a_n \to \log 2$ as $n \to \infty$.
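For illustration (my own sketch, not part of the answer), the sequence $a_n$ can be generated exactly by peeling the value $4$ outward: requiring each level to reproduce the previous one gives $a_1 = 15/2$ and $a_{n+1} = (a_n^2 - 1)/(n+2)$. A short Python check of the first terms and of the doubly exponential growth rate:

```python
from fractions import Fraction
import math

# Peel 4 outward: a_1 = 15/2 and a_{n+1} = (a_n**2 - 1)/(n + 2).
a = [Fraction(15, 2)]
for n in range(1, 8):
    a.append((a[-1] ** 2 - 1) / (n + 2))
print(a[:4])  # matches 15/2, 221/12, 48697/576, 2371066033/1658880

# a_n blows up doubly exponentially, so track L_n ~ log(a_n) with the
# approximation log(a_{n+1}) ~ 2*log(a_n) - log(n+2), valid for large a_n.
L, N = math.log(15 / 2), 200
for n in range(1, N):
    L = 2 * L - math.log(n + 2)
print(math.log(L) / N)  # close to log 2 ~ 0.693, the rate claimed above
```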






    As others have said, the rigorous definition of an infinite expression comes from the limit of a sequence of finite terms. The terms need to be well-defined, but in practice, we just try to make sure the pattern is clear from context.



    Now let's see what goes wrong with your other example. You wrote:



    $$4 = \sqrt{16}=\sqrt{1+2\sqrt{56.25}}=\sqrt{1+2\sqrt{1+3\sqrt{\frac{48841}{144}}}}=\cdots=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$



    The problem is that each term in the sequence (e.g. if we stop at the $4$) fails to include an "extra" amount, and this amount is not going to zero. So if we look at the truncated expressions, we see they won't converge to $4$ unless we include the extra amounts we keep pushing to the right. It's the same logical mistake as taking



    \begin{align*}
    2 &= 1 + 1 \\
    &= \frac{1}{2} + \frac{1}{2} + 1 \\
    &= \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + 1 \\
    &= \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + 1
    \end{align*}

    And then saying: wait, the partial sums in the last line only converge to $1$ instead of $2$. But this is a mistake, because we can't push the extra $1$ "infinitely far" to the right. Otherwise, the partial sums we write down will just look like $\frac{1}{2} + \frac{1}{4} + \cdots$ and will never include the $1$. The same thing is happening (roughly) in your example.
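    A tiny numerical illustration of this point (my own addition): the finite truncations of the last line never see the trailing $1$.

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : every finite truncation misses the
# "extra" 1 that was pushed infinitely far right, so the limit is 1, not 2.
partial = [sum(2.0 ** -k for k in range(1, n + 1)) for n in range(1, 30)]
print(partial[-1])  # ~ 1.0
```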







    • However, the terms in the argument for the value being 3 are 16, 25, 36, ..., also not converging to 0. – M.Herzkamp










    • @M.Herzkamp, I've edited to avoid that example. To look at the extra terms rigorously, you really have to subtract the entire term in the sequence from the total, which doesn't have a simple closed form in the square-root examples. – usul






















    Your example doesn't quite capture the spirit of the Ramanujan expression: it arises from the succession of the integers, not from ad hoc computation to fit the expression to a value.



    Let's make a sketch of an induction proof. Take the base case $n=3$:
    $$\sqrt{n^2}=3$$
    and, for the inductive step, take the radical at the "deepest" level of the chain and apply



    $$\sqrt{k^2}=\sqrt{1+(k-1)(k+1)}$$
    $$=\sqrt{1+(k-1)\sqrt{(k+1)^2}}$$



    This gives us exactly the sequence we need. It means we can take any $N \geq 3$ and construct the expression with $\sqrt{N^2}$ in the deepest radical, and know that the equality with $3$ has been preserved. (What does this tell us about the limit at infinity?)
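    A quick numerical check of this induction (my own sketch; the function name is mine): with $\sqrt{N^2}$ seeded at the deepest level, every finite expression evaluates to $3$.

```python
import math

def telescoped(N):
    """sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ... + (N-2)*sqrt(N**2)))): by the
    identity sqrt(1 + k*(k+2)) = k+1, each intermediate value is exactly k+2."""
    v = float(N)                   # innermost radical: sqrt(N^2) = N
    for k in range(N - 2, 1, -1):  # coefficients N-2 down to 2
        v = math.sqrt(1 + k * v)
    return v

print([telescoped(N) for N in (3, 5, 10, 30)])  # all 3 up to rounding
```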



    For the $4$ example, there's no analogous step to get from one term of the sequence to the next; you have to compute the innermost term by working through the whole chain each time.






    • Thanks for your help. However, gathering all the help from the community so far, I would say I disagree with your points. In the case of $4$, we can definitely define a sequence of nested radicals (although the general term is super ugly) that is well-defined and converges to 4. The problem is that the expression of the infinite nested radical alone is not telling us all the information. It is never properly defined by itself. – Anson NG










    • It's true that your sequence converges to 4 (because that's how it's constructed), but in some sense it doesn't converge to the abstracted expression in question here. Think about the expression sqrt(1+2*sqrt(1+3*sqrt(1+...))). What's the additive contribution of the "..." part (i.e. the difference between this and sqrt(1+2*sqrt(1+3*sqrt(1))))? We should expect it to approach 0 as we go deeper into the chain if we want to arrive at a limit, but in your derivation you're squaring it at each term of the sequence, which offsets the reduction you'd see from it being inside another square root. – ripkoops




















    $4=\sqrt{16}=\sqrt{1+3\sqrt{25}}=\sqrt{1+3\sqrt{1+4\sqrt{36}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{49}}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{64}}}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{81}}}}}}$

    $=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{1+\cdots}}}}}}}$

    $5=\sqrt{25}=\sqrt{1+4\sqrt{36}}=\sqrt{1+4\sqrt{1+5\sqrt{49}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{64}}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{81}}}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{100}}}}}}$

    $=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{1+9\sqrt{1+\cdots}}}}}}}$

    $\vdots$

    $n=\sqrt{1+(n-1)\sqrt{1+n\sqrt{1+(n+1)\sqrt{1+(n+2)\sqrt{1+(n+3)\sqrt{1+(n+4)\sqrt{1+\cdots}}}}}}}$
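    These identities can be spot-checked numerically; here is a hedged Python sketch (the function name and truncation depth are my choices) that truncates the general pattern and watches it approach $n$.

```python
import math

def nested(n, depth=60):
    # truncation of n = sqrt(1 + (n-1)*sqrt(1 + n*sqrt(1 + (n+1)*sqrt(...))))
    v = 1.0
    for k in range(n - 2 + depth, n - 2, -1):  # coefficients n+depth-2 .. n-1
        v = math.sqrt(1 + k * v)
    return v

print([round(nested(n), 8) for n in (4, 5, 9)])  # ~ 4, 5, 9
```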















    • This does not answer the question. – Wojowu






    • @Wojowu: however, you have to admit that it is a nice piece of information, but too much for a comment. A little explanation might be beneficial though... – M.Herzkamp











    5 Answers










    Introduction:



    The issue is what "..." really "represents."



    Typically we use it as a sort of shorthand, as if to say "look, I can't write infinitely many things down, just assume that the obvious pattern holds and goes on infinitely."



    This idea holds for all sorts of things - nested radicals, infinite sums, continued fractions, infinite sequences, etc.





    On Infinite Sums:



    A simple example: the sum of the reciprocals of squares:



    $$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ...$$



    This is a well-known summation: it is the Riemann zeta function $\zeta(s)$ at $s=2$, and is known to evaluate to $\pi^2/6$ (proved by Euler; known as the Basel problem).



    Another, easier-to-handle summation is the geometric sum



    $$1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + ...$$



    This is a geometric series where the ratio is $1/2$ - each summand is half the previous one. We know, too, that this evaluates to $2$.



    Another geometric series you might see in proofs that $0.999... = 1$ is



    $$9 \left( \frac{1}{10} + \frac{1}{100} + \frac{1}{1,000} + \frac{1}{10,000} + ... \right)$$



    which equals $1$. In fact, any infinite geometric series with first term $a$ and ratio $|r|<1$ can be evaluated by



    $$\sum_{n=0}^\infty ar^n = \frac{a}{1-r}$$



    So a question arises: ignoring these "obvious" results (depending on your amount of mathematical knowledge), how would we know these converge to the given values? What, exactly, does it mean for a summation to converge to a number or equal a number? For finite sums this is no issue: if nothing else, we could add up each number manually, but we can't just add up every number from a set of infinitely many numbers.



    Well, one could argue by common sense that, if the sequence converges to some number, the more and more terms you add up, the closer they'll get to that number.



    So we obtain one definition for the convergence of an infinite sum: consider a new sequence whose $n^{th}$ term is the sum of the first $n$ terms of the original sequence. To introduce some symbols, suppose we're trying to find the sum



    $$\sum_{k=1}^\infty x_k = x_1 + x_2 + x_3 + x_4 + ...$$



    for whatever these $x_i$'s are. Then define these so-called "partial sums" of this by a function $S(n)$:



    $$S(n) = \sum_{k=1}^n x_k = x_1 + x_2 + ... + x_n$$



    Then we get a sequence of sums:



    $$S(1), S(2), S(3), ...$$



    or equivalently



    $$x_1 \;\;,\;\; x_1 + x_2\;\;,\;\; x_1 + x_2 + x_3\;\;,\;\; ...$$



    Then we ask: what does $S(n)$ approach as $n$ grows without bound, if anything at all? (In calculus, we call this "the limit of the partial sums $S(n)$ as $n$ approaches infinity.")



    For the case of our first geometric sum, we immediately see the sequence of partial sums



    $$1, \frac{3}{2}, \frac{7}{4}, \frac{15}{8},...$$



    Clearly, this suggests a pattern - and if you want to, you can go ahead and prove it, I won't do so here for brevity's sake. The pattern is that the $n^{th}$ term of the sequence is



    $$S(n) = \frac{2^{n+1}-1}{2^{n}}$$



    We can then easily consider the limit of these partial sums:



    $$\lim_{n\to\infty} S(n) = \lim_{n\to\infty} \frac{2^{n+1}-1}{2^{n}} = \lim_{n\to\infty} \frac{2^{n+1}}{2^{n}} - \frac{1}{2^{n}} = \lim_{n\to\infty} 2 - \lim_{n\to\infty} \frac{1}{2^{n}}$$



    Obviously, $1/2^{n} \to 0$ as $n$ grows without bound, and $2$ is not affected by $n$, so we conclude



    $$\lim_{n\to\infty} S(n) = \lim_{n\to\infty} 2 - \lim_{n\to\infty} \frac{1}{2^n} = 2 - 0 = 2$$



    And thus we say



    $$\sum_{k=0}^\infty \left(\frac 1 2 \right)^k = 1 + \frac 1 2 + \frac 1 4 + \frac 1 8 + ... = 2$$



    because the partial sums approach $2$.
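    The closed form can be checked against a running sum in a few lines of Python (my own sketch; indexing from $n=0$ so that $S(0)=1$):

```python
# Verify S(n) = (2^(n+1) - 1)/2^n against a directly accumulated partial sum
# of 1 + 1/2 + 1/4 + ..., indexing from n = 0.
S = 0.0
for n in range(0, 20):
    S += 2.0 ** -n
    assert abs(S - (2 ** (n + 1) - 1) / 2 ** n) < 1e-12
print(S)  # approaching 2
```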





    On Continued Fractions:



    That was a simple, "first" sort of example, but mathematicians essentially do the same thing in other contexts. I want to touch on one more such context before we deal with the radical case, just to nail that point home.



    In this case, it will be with continued fractions. One of the simpler such fractions is the one for $1$:



    $$1 = \frac{1}{2-\frac{1}{2-\frac{1}{2-\frac{1}{...}}}}$$



    As usual, the "..." denotes that this continues forever. But what does it mean for this infinite expression to equal $1$?



    For this, we consider a more general analogue of the "partial sum" from before: a "convergent." We cut up the expression at logical finite points, with the choice of cut points depending on the context. Then, if the sequence of convergents approaches a limit, we say the expression equals that limit.



    What are the convergents for a continued fraction? By convention, we cut off just before the start of the next fraction. That is, in the continued fraction for $1$, we cut off at the $n^{th}$ $2$ for the $n^{th}$ convergent, and ignore what follows. So we get the sequence of convergents



    $$\frac{1}{2} , \frac{1}{2-\frac{1}{2}}, \frac{1}{2-\frac{1}{2-\frac{1}{2}}},...$$



    Working out the numbers, we find the sequence to be



    $$\frac{1}{2},\frac{2}{3},\frac{3}{4},...$$



    Again, we see a pattern! The $n^{th}$ term of the sequence is clearly of the form



    $$\frac{n}{n+1}$$



    Let $C(n)$ be a function denoting the $n^{th}$ convergent. Then $C(1)=1/2,$ $C(2) = 2/3,$ and in general $C(n)=n/(n+1)$. So, like before, we consider the infinite limit:



    $$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} \frac{n}{n+1} = \lim_{n\to\infty} \left(1 - \frac{1}{n+1}\right) = 1 - 0 = 1$$



    Thus we can conclude that the continued fraction equals $1$, because its sequence of convergents converges to $1$!
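    The convergents can also be generated by the recurrence $C(n+1) = 1/(2 - C(n))$, since each deeper level just wraps the previous convergent. A short Python sketch (my own) confirms the pattern $n/(n+1)$:

```python
# C(1) = 1/2, and each deeper level wraps the previous convergent:
# C(n+1) = 1/(2 - C(n)); the n-th convergent is n/(n+1).
C = 0.5
for n in range(1, 30):
    assert abs(C - n / (n + 1)) < 1e-12
    C = 1 / (2 - C)  # deepen the fraction one level
print(C)  # creeping up toward 1
```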





    On Infinite Radicals:



    So now, we touch on infinite nested radicals. They're messier to deal with but doable.



    One of the simpler examples of such radicals to contend with is



    $$2 = \sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +...}}}}}$$



    As with the previous two cases, we see an infinite expression. By now we instinctively conclude: to logically define a limit for this expression - to assign it a value, provided one even exists - we need to chop it up at finite points, defining a sequence of convergents $C(n)$, and then find the limit of $C(n)$ as $n\to\infty$.



    Nested radicals are a lot messier than the previous, but we manage.



    So first let the sequence of convergents be given by cutting off everything after the $n^{th}$ $2$ in the expression. Thus we get the sequence



    $$\sqrt 2 \;\;,\;\; \sqrt{2 + \sqrt{2}}\;\;,\;\; \sqrt{2+\sqrt{2+\sqrt{2}}}\;\;,\;\; \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}$$



    Okay, this isn't particularly nice already, but there does exist, shockingly enough, a closed-form explicit expression for $C(n)$ (from S. Zimmerman and C. Ho):



    $$C(n) = 2\cos\left(\frac{\pi}{2^{n+1}}\right)$$



    (I had to find that expression by Googling, I honestly didn't know that offhand. It can be proved by induction, as touched on in this MSE question.)



    So luckily, then, we can find the limit of $C(n)$:



    $$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} 2\cos\left(\frac{\pi}{2^{n+1}}\right)$$



    It is probably obvious that the argument of the cosine function approaches $0$ as $n$ grows without bound, and thus



    $$\lim_{n\to\infty} C(n) = \lim_{n\to\infty} 2\cos\left(\frac{\pi}{2^{n+1}}\right) = 2\cos(0) = 2\cdot 1 = 2$$



    Thus, since its convergents approach $2$, we can conclude that



    $$2 = \sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +\sqrt{2 +...}}}}}$$
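    Both the convergents and the closed form are easy to cross-check numerically; a minimal Python sketch (mine, not part of the answer):

```python
import math

v = math.sqrt(2)  # C(1)
for n in range(1, 15):
    # closed form cited above: C(n) = 2*cos(pi / 2^(n+1))
    assert abs(v - 2 * math.cos(math.pi / 2 ** (n + 1))) < 1e-12
    v = math.sqrt(2 + v)  # next convergent C(n+1)
print(v)  # approaching 2
```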





    A Lengthy Conclusion:



    So, in short, how do we evaluate an infinite expression, be it radical, continued fraction, sum, or otherwise?



    We begin by truncating the expression at convenient finite places, creating a series of convergents, generalizations of the "partial sums" introduced in calculus. We then try to get a closed form or some other usable expression for the convergents $C(n)$, and consider the value as $ntoinfty$. If it converges to some value, we say that the expression is in fact equal to that value. If it doesn't, then the expression doesn't converge to any value.



    This doesn't mean each expression is "nice." Radical expressions in particular, in my experience, tend to be nasty as all hell, and I'm lucky I found that one closed form expression for the particularly easy radical I chose.



    This doesn't mean that other methods cannot be used to find the values, so long as there's some sort of logical justification for said method. For example, there is a justification for the formula for an infinite (and finite) geometric sum. We might have to circumvent the notion of partial sums entirely, or at least it might be convenient to do so. For example, with the Basel problem, Euler's proof focused on Maclaurin series, and none of this "convergent" stuff. (That proof is here plus other proofs of it!)



    Luckily, at least, this notion of convergents, even if it may not always be the best way to do it, lends itself to an easy way to check a solution to any such problem. Just find a bunch of the convergents - take as many as you need. If you somehow have multiple solutions, as you think with Ramanujan's radical, then you'll see the convergents get closer and closer to the "true" solution.



    (How many convergents you need to find depends on the situation and how close your proposed solutions are. It might be immediately obvious after $10$ iterations, or might not be until $10,000,000$. This logic also relies on the assumption that there is only one solution to a given expression that is valid. Depending on the context, you might see cases where multiple solutions are valid but this "approaching by hand" method will only get you some of the solutions. This touches on the notion of "unstable" and "stable" solutions to dynamical systems - which I believe is the main context where such would pop up - but it's a bit overkill to discuss that for this post.)



    So I will conclude by showing, in this way, that the value of Ramanujan's radical is $3$.



    We begin with the radical itself:



    $$\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}=3$$



    Let us begin by getting a series of convergents:



    $$\sqrt{1} \;\;,\;\; \sqrt{1 + 2\sqrt{1}} \;\;,\;\; \sqrt{1 + 2\sqrt{1 + 3\sqrt{1}}} \;\;,\;\; \ldots$$



    Because the $\sqrt{1}$ isn't necessary, we just let it be $1$.



    $$1 \;\;,\;\; \sqrt{1 + 2} \;\;,\;\; \sqrt{1 + 2\sqrt{1 + 3}} \;\;,\;\; \ldots$$



    Okay so ... where to go from here? Honestly, my initial temptation was to just use a MATLAB script and evaluate it, but I can't think of even a recursive closed form for this that would be nice enough. So in any event, we just have to go by "hand" (and by hand I mean WolframAlpha). Let $C(n)$ be the $n^{th}$ convergent. Then




    • $C(1) = 1$

    • $C(2) \approx 1.732$

    • $C(3) \approx 2.236$

    • $C(4) \approx 2.560$

    • $C(5) \approx 2.755$

    • $C(6) \approx 2.867$

    • $C(7) \approx 2.929$

    • $C(8) \approx 2.963$

    • $C(9) \approx 2.981$

    • $C(10) \approx 2.990$


    To skip a few values, because at this point the changes get minimal: I used a macro to generate the expression for $C(50)$, put it into Microsoft Excel, and got the approximate result



    $$C(50) \approx 2.999\;999\;999\;999\;99$$



    So while not the most rigorous result, we can at least on an intuitive level feel that the convergents of Ramanujan's radical converge to $3$, not $4$ or any other number. Granting that this is not an ironclad proof of the convergence, at least intuitively we can feel that



    $$3 = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$
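    Rather than a spreadsheet macro, the convergents are a short loop in Python (my own sketch reproducing the table above):

```python
import math

def C(n):
    """n-th convergent: sqrt(1 + 2*sqrt(1 + 3*sqrt(... + n*sqrt(1))))."""
    v = 1.0
    for k in range(n, 1, -1):  # innermost coefficient n, outermost 2
        v = math.sqrt(1 + k * v)
    return v

for n in (2, 5, 10, 50):
    print(n, C(n))  # matches the table above; C(50) is 3 to ~14 places
```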





    Whew! Hopefully that lengthy post was helpful to you!





    A late footnote, but Mathologer on YouTube did a video on this very topic, so his video would give a decent summary of all this stuff as well. Here's a link.















    • On Mathematica you can do f[x_, n_] := Sqrt[1 + n * x]; ram[x0_, n_] := Fold[f, Sqrt[x0], Reverse @ Range[2, n]]; The results suggest that the radical approaches 3 regardless of the innermost initial value. – lastresort
















    44












    $begingroup$

    Introduction:



    The issue is what "..." really "represents."



    Typically we use it as a sort of shorthand, as if to say "look, I can't write infinitely many things down, just assume that the obvious pattern holds and goes on infinitely."



    This idea holds for all sorts of things - nested radicals, infinite sums, continued fractions, infinite sequences, etc.





    On Infinite Sums:



    A simple example: the sum of the reciprocals of squares:



    $$1 + frac{1}{4} + frac{1}{9} + frac{1}{16} + ...$$



    This is a well known summation. It is the Riemann zeta function $zeta(s)$ at $s=2$, and is known to evaluate to $pi^2/6$ (proved by Euler and known as the Basel problem).



    Another, easier-to-handle summation is the geometric sum



    $$1 + frac 1 2 + frac 1 4 + frac 1 8 + ...$$



    This is a geometric series where the ratio is $1/2$ - each summand is half the previous one. We know, too, that this evaluates to $2$.



    Another geometric series you might see in proofs that $0.999... = 1$ is



    $$9 left( frac{1}{10} + frac{1}{100} + frac{1}{1,000} + frac{1}{10,000} + ... right)$$



    which equals $1$. In fact, any infinite geometric series, with first term $a$ and ratio $|r|<1$ can be evaluated by



    $$sum_{n=0}^infty ar^n = frac{a}{1-r}$$



    So a question arises - ignoring these "obvious" results (depending on your amount of mathematical knowledge), how would we know these converge to the given values? What, exactly, does it mean for a summation to converge to a number or equal a number? For finite sums this is no issue - if nothing else, we could add up each number manually, but we can't just add up every number from a set of infinitely-many numbers.



    Well, one could argue by common sense that, if the sequence converges to some number, the more and more terms you add up, the closer they'll get to that number.



    So we obtain one definition for the convergence of an infinite sum. Consider a sequence where the $n^{th}$ term is defined by the sum of the first $n$ terms in the sequence. To introduce some symbols, suppose we're trying to find the sum



    $$sum_{k=1}^infty x_k = x_1 + x_2 + x_3 + x_4 + ...$$



    for whatever these $x_i$'s are. Then define these so-called "partial sums" of this by a function $S(n)$:



    $$S(n) = sum_{k=1}^n x_k = x_1 + x_2 + ... + x_n$$



    Then we get a sequence of sums:



    $$S(1), S(2), S(3), ...$$



    or equivalently



    $$x_1 ;;,;; x_1 + x_2;;,;; x_1 + x_2 + x_3;;,;; ...$$



    Then we ask: what does $S(n)$ approach as $n$ grows without bound, if anything at all? (In calculus, we call this "the limit of the partial sums $S(n)$ as $n$ approaches infinity.")



    For the case of our first geometric sum, we immediately see the sequence of partial sums



    $$1, frac{3}{2}, frac{7}{4}, frac{15}{8},...$$



    Clearly, this suggests a pattern - and if you want to, you can go ahead and prove it, I won't do so here for brevity's sake. The pattern is that the $n^{th}$ term of the sequence is



    $$S(n) = frac{2^{(n+1)}-1}{2^{n}}$$



    We can then easily consider the limit of these partial sums:



    $$lim_{ntoinfty} S(n) = lim_{ntoinfty} frac{2^{(n+1)}-1}{2^{n}} = lim_{ntoinfty} frac{2^{(n+1)}}{2^{n}} - frac {1}{2^{n}} = lim_{ntoinfty} 2 - lim_{ntoinfty} frac{1}{2^{n}}$$



    Obviously, $1/2^{n} to 0$ as $n$ grows without bound, and $2$ is not affected by $n$, so we conclude



    $$lim_{ntoinfty} S(n) = lim_{ntoinfty} 2 - lim_{ntoinfty} frac{1}{2^n} = 2 - 0 = 2$$



    And thus we say



    $$sum_{k=0}^infty left(frac 1 2 right)^k = 1 + frac 1 2 + frac 1 4 + frac 1 8 + ... = 2$$



    because the partial sums approach $2$.





    On Continued Fractions:



    That was a simple, "first" sort of example, but mathematicians essentially do the same thing in other contexts. I want to touch on one more such context before we deal with the radical case, just to nail that point home.



    In this case, it will be with continued fractions. One of the simpler such fractions is the one for $1$:



    $$1 = frac{1}{2-frac{1}{2-frac{1}{2-frac{1}{...}}}}$$



    As usual, the "..." denotes that this continues forever. But what it does it mean for this infinite expression to equal $1$?



    For this, we consider a more general analogue of the "partial sum" from before - a "convergent." We cut up the sequence at logical finite points, whatever those points being depending on the context. Then if the sequence of the convergents approaches a limit, we say they're equal.



    What are the convergents for a continued fraction? By convention, we cut off just before the start of the next fraction. That is, in the continued fraction for $1$, we cut off at the $n^{th} ; 2$ for the $n^{th}$ convergent, and ignore what follows. So we get the sequence of convergents



    $$frac{1}{2} , frac{1}{2-frac{1}{2}}, frac{1}{2-frac{1}{2-frac{1}{2}}},...$$



    Working out the numbers, we find the sequence to be



    $$frac{1}{2},frac{2}{3},frac{3}{4},...$$



    Again, we see a pattern! The $n^{th}$ term of the sequence is clearly of the form



    $$frac{n-1}{n}$$



    Let $C(n)$ be a function denoting the $n^{th}$ convergent. Then $C(1)=1/2,$ $C(2) = 2/3,$ $C(n)=(n-1)/n,$ and so on. So like before we consider the infinite limit:



    $$lim_{ntoinfty} C(n) = lim_{ntoinfty} frac{n-1}{n} = lim_{ntoinfty} 1 - frac 1 n = lim_{ntoinfty} 1 - lim_{ntoinfty} frac 1 n = 1 - 0 = 1$$



    Thus we can conclude that the continued fraction equals $1$, because its sequence of convergents equals $1$!





    On Infinite Radicals:



    So now, we touch on infinite nested radicals. They're messier to deal with but doable.



    One of the simpler examples of such radicals to contend with is



    $$2 = sqrt{2 +sqrt{2 +sqrt{2 +sqrt{2 +sqrt{2 +...}}}}}$$



    As with the previous two cases we see an infinite expression. We instinctively conclude by now: to logically define a limit for this expression - to assign it a value provided it even exists - we need to chop this up at finite points, defining a sequence of convergents $C(n)$, and then find $C(n)$ as $ntoinfty$.



    Nested radicals are a lot messier than the previous, but we manage.



    So first let the sequence of convergents be given by cutting off everything after the $n^{th} ; 2$ in the expression. Thus we get the sequence



    $$sqrt 2 ;;,;; sqrt{2 + sqrt{2}};;,;; sqrt{2+sqrt{2+sqrt{2}}};;,;; sqrt{2+sqrt{2+sqrt{2+sqrt{2}}}}$$



    Okay this isn't particularly nice already, but apparently there does exist, shockingly enough, a closed-form explicit expression for $C(n)$: (from: S. Zimmerman, C. Ho)



    $$C(n) = 2cosleft(frac{pi}{2^{n+1}}right)$$



    (I had to find that expression by Googling, I honestly didn't know that offhand. It can be proved by induction, as touched on in this MSE question.)



    So luckily, then, we can find the limit of $C(n)$:



    $$lim_{ntoinfty} C(n) = lim_{ntoinfty} 2cosleft(frac{pi}{2^{n+1}}right)$$



    It is probably obvious that the argument of the cosine function approaches $0$ as $n$ grows without bound, and thus



    $$lim_{ntoinfty} C(n) = lim_{ntoinfty} 2cosleft(frac{pi}{2^{n+1}}right) = 2cos(0) = 2cdot 1 = 2$$



    Thus, since its convergents approach $2$, we can conclude that



    $$2 = sqrt{2 +sqrt{2 +sqrt{2 +sqrt{2 +sqrt{2 +...}}}}}$$





    A Lengthy Conclusion:



    So, in short, how do we evaluate an infinite expression, be it radical, continued fraction, sum, or otherwise?



    We begin by truncating the expression at convenient finite places, creating a series of convergents, generalizations of the "partial sums" introduced in calculus. We then try to get a closed form or some other usable expression for the convergents $C(n)$, and consider the value as $ntoinfty$. If it converges to some value, we say that the expression is in fact equal to that value. If it doesn't, then the expression doesn't converge to any value.



    This doesn't mean each expression is "nice." Radical expressions in particular, in my experience, tend to be nasty as all hell, and I'm lucky I found that one closed form expression for the particularly easy radical I chose.



    This doesn't mean that other methods cannot be used to find the values, so long as there's some sort of logical justification for said method. For example, there is a justification for the formula for an infinite (and finite) geometric sum. We might have to circumvent the notion of partial sums entirely, or at least it might be convenient to do so. For example, with the Basel problem, Euler's proof focused on Maclaurin series, and none of this "convergent" stuff. (That proof is here plus other proofs of it!)



    Luckily, at least, this notion of convergents, even if it may not always be the best way to do it, lends itself to an easy way to check a solution to any such problem. Just find a bunch of the convergents - take as many as you need. If you somehow have multiple solutions, as you think with Ramanujan's radical, then you'll see the convergents get closer and closer to the "true" solution.



    (How many convergents you need to find depends on the situation and how close your proposed solutions are. It might be immediately obvious after $10$ iterations, or might not be until $10,000,000$. This logic also relies on the assumption that there is only one solution to a given expression that is valid. Depending on the context, you might see cases where multiple solutions are valid but this "approaching by hand" method will only get you some of the solutions. This touches on the notion of "unstable" and "stable" solutions to dynamical systems - which I believe is the main context where such would pop up - but it's a bit overkill to discuss that for this post.)



    So I will conclude by showing, in this way, that the solution is $3$ to Ramanujan's radical.



    We begin with the radical itself:



    $$sqrt{1+2sqrt{1+3sqrt{1+4sqrt{1+cdots}}}}=3$$



    Let us begin by getting a series of convergents:



    $$sqrt{1} ;;,;; sqrt{1 + 2sqrt{1}} ;;,;; sqrt{1 + 2sqrt{1 + 3sqrt{1}}} ;;,;;$$



    Because the $sqrt{1}$ isn't necessary, we just let it be $1$.



    $$1 ;;,;; sqrt{1 + 2} ;;,;; sqrt{1 + 2sqrt{1 + 3}} ;;,;;$$



    Okay so ... where to go from here? Honestly, my initial temptation was to just use a MATLAB script and evaluate it, but I can't think of even a recursive closed form for this that would be nice enough. So in any event, we just have to go by "hand" (and by hand I mean WolframAlpha). Let $C(n)$ be the $n^{th}$ convergent. Then




    • $C(1) = 1$

    • $C(2) \approx 1.732$

    • $C(3) \approx 2.236$

    • $C(4) \approx 2.560$

    • $C(5) \approx 2.755$

    • $C(6) \approx 2.867$

    • $C(7) \approx 2.929$

    • $C(8) \approx 2.963$

    • $C(9) \approx 2.981$

    • $C(10) \approx 2.990$


    Skipping ahead a few values, since at this point the changes get minimal: I used a macro to build the formula for $C(50)$, put it into Microsoft Excel, and got the approximate result



    $$C(50) \approx 2.999\;999\;999\;999\;99$$
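    These truncations are easy to script; here is a minimal Python sketch (standing in for the MATLAB/Excel approach described above), evaluating each convergent from the inside out:

    ```python
    import math

    def convergent(n):
        """n-th truncation of sqrt(1 + 2 sqrt(1 + 3 sqrt(1 + ...))),
        cut off after the coefficient n, with the innermost radicand sqrt(1) = 1."""
        value = 1.0  # the innermost sqrt(1)
        # fold the coefficients n, n-1, ..., 2 outward
        for k in range(n, 1, -1):
            value = math.sqrt(1 + k * value)
        return value

    for n in (1, 2, 3, 10, 50):
        print(n, convergent(n))
    ```

    Each convergent just folds the coefficients $n, n-1, \ldots, 2$ outward from the innermost $\sqrt{1}$, so no closed form is needed to reproduce the table above.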



    So while this is not the most rigorous argument, on an intuitive level we can see that the convergents of Ramanujan's radical approach $3$, not $4$ or any other number. Granted that this is not an ironclad proof of convergence, at least intuitively we can conclude that



    $$3 = \sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$





    Whew! Hopefully that lengthy post was helpful to you!





    A late footnote, but Mathologer on YouTube did a video on this very topic, so his video would give a decent summary of all this stuff as well. Here's a link.







    On Mathematica you can do f[x_, n_] := Sqrt[1 + n * x]; ram[x0_, n_] := Fold[f, Sqrt[x0], Reverse @ Range[2, n]]. The results suggest that the radical approaches 3 regardless of the innermost initial value.
    – lastresort, yesterday


























    answered yesterday by Eevee Trainer (edited 12 hours ago by rono)




















    The user @Eevee Trainer provided a nice explanation of how we define an infinite nested radical as the limit of finite nested radicals, a definition which should be insensitive to the starting point. For full generality in this regard, we can consider the convergence of the following finite nested radical



    $$ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{ 1 + \cdots n \sqrt{1 + (n+1) a_n}}}} \tag{*} $$



    for a given sequence $(a_n)$ of non-negative real numbers. In this answer, I will prove the convergence of $\text{(*)}$ to $3$ under a mild condition. My solution will involve some preliminary knowledge of analysis.





    Setting. We consider the map $\Phi$, defined on the space of all functions from $[0,\infty)$ to $[0, \infty)$, which is given by

    $$ \Phi[f](x) = \sqrt{1 + xf(x+1)}. $$



    Let us check how $\Phi$ is related to our problem. The idea is to apply the trick of computing Ramanujan's infinite nested radical not to a single number, but rather to a function. Here, we choose $F(x) = x+1$. Since

    $$ F(x) = 1+x = \sqrt{1+x(x+2)} = \sqrt{1 + xF(x+1)} = \Phi[F](x), $$



    it follows that we can iterate $\Phi$ several times to obtain

    $$ F(x) = \Phi^{\circ n}[F](x) = \sqrt{1 + x\sqrt{1 + (x+1)\sqrt{1 + \cdots (x+n-1)\sqrt{(x+n+1)^2}}}}, $$

    where $\Phi^{\circ n} = \Phi \circ \cdots \circ \Phi$ is the $n$-fold function composition of $\Phi$. Of course, the original radical corresponds to the case $x = 2$. This already proves that $\Phi^{\circ n}[F](x)$ converges to $F(x)$ as $n\to\infty$.



    On the other hand, infinite nested radicals do not have any designated value to start with, and so the above computation is far from satisfactory when it comes to defining an infinite nested radical. Thus, a form of robustness of the convergence is required. In this regard, @Eevee Trainer investigated the convergence of

    $$\Phi^{\circ n}[1](2) = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \cdots (n+1)\sqrt{1}}}}, $$

    and confirmed numerically that this still tends to $F(2) = 3$ as $n$ grows. Of course, it would be ideal if we could verify the same conclusion for other choices of starting point, using a rigorous argument.
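    This robustness is easy to probe numerically. A small Python sketch (my own check, not part of either answer) evaluates the truncation $\text{(*)}$ at $x=2$ with various innermost seeds and shows they all approach $3$:

    ```python
    import math

    def nested(n, a):
        """Evaluate sqrt(1 + 2 sqrt(1 + 3 sqrt(... n sqrt(1 + (n+1) a)))),
        i.e. the truncation (*) with innermost seed a in place of a_n."""
        value = a
        for k in range(n + 1, 1, -1):  # coefficients n+1, n, ..., 2
            value = math.sqrt(1 + k * value)
        return value

    # Seeds differing by six orders of magnitude are all squashed toward 3,
    # since each square root halves the (logarithmic) deviation.
    for a in (0.0, 1.0, 1e6):
        print(a, nested(60, a))
    ```

    The repeated square roots halve the logarithmic error at each level, which is exactly the mechanism the Corollary below makes rigorous.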



    Proof. One nice thing about $\Phi$ is that it is monotone, i.e., if $f \leq g$, then $\Phi[f] \leq \Phi[g]$. From this, it is easy to establish the following observation.




    Lemma 1. For any $f \geq 0$, we have $\liminf_{n\to\infty} \Phi^{\circ n}[f](x) \geq x+1$.




    Proof. We inductively apply the monotonicity of $\Phi$ to find that

    $$\Phi^{\circ (n+1)}[0](x) \geq x^{1-2^{-n}} \qquad \text{and} \qquad \Phi^{\circ n}[x](x) \geq x + 1 - 2^{-n}. $$



    So, for any integer $m \geq 0$,



    \begin{align*}
    \liminf_{n\to\infty} \Phi^{\circ n}[f](x)
    &\geq \liminf_{n\to\infty} \Phi^{\circ m}[\Phi^{\circ (n+1)}[0]](x)
    \geq \liminf_{n\to\infty} \Phi^{\circ m}[x^{1-2^{-(n-1)}}](x) \\
    &= \Phi^{\circ m}[x](x)
    \geq x + 1 - 2^{-m},
    \end{align*}



    and letting $m \to \infty$ proves Lemma 1 as required.
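    For completeness, the first of the two inductive bounds can be verified explicitly (my own filling-in of the "inductively apply" step; the second bound follows the same pattern):

    ```latex
    \begin{align*}
    &\text{Base case } (n=0):\quad \Phi^{\circ 1}[0](x) = \sqrt{1 + x \cdot 0} = 1 = x^{1-2^{-0}} \quad (x > 0). \\
    &\text{Inductive step: if } \Phi^{\circ (n+1)}[0](x) \geq x^{1-2^{-n}} \text{ for all } x, \text{ then} \\
    &\qquad \Phi^{\circ (n+2)}[0](x) = \sqrt{1 + x\,\Phi^{\circ (n+1)}[0](x+1)}
    \geq \sqrt{x\,(x+1)^{1-2^{-n}}}
    \geq \sqrt{x \cdot x^{1-2^{-n}}} = x^{1-2^{-(n+1)}}.
    \end{align*}
    ```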




    Lemma 2. If $\limsup_{x\to\infty} \frac{\log \log \max\{e, f(x)\}}{x} < \log 2$, then $\limsup_{n\to\infty} \Phi^{\circ n}[f](x) \leq x+1$.




    Proof. By the assumption, there exist $C > 1$ and $\alpha \in [0, 2)$ such that $f(x) \leq C e^{\alpha^x} (x + 1)$. Again applying the monotonicity of $\Phi$, we check that $\Phi^{\circ n}[f](x) \leq C^{2^{-n}} e^{(\alpha/2)^n \alpha^x} (x+1)$ holds. This is certainly true if $n = 0$. Moreover, assuming that this is true for $n$,



    \begin{align*}
    \Phi^{\circ (n+1)}[f](x)
    &\leq \Phi\left[C^{2^{-n}} e^{(\alpha/2)^n \alpha^x} (x+1)\right](x) \\
    &= \left( 1 + x C^{2^{-n}} e^{(\alpha/2)^n \alpha^{x+1}} (x+2) \right)^{1/2} \\
    &\leq C^{2^{-n-1}} e^{(\alpha/2)^{n+1} \alpha^{x}} (x+1).
    \end{align*}



    Now letting $n \to \infty$ proves the desired result.




    Corollary. If $a_n \geq 0$ satisfies $\limsup_{n\to\infty} \frac{\log\log \max\{e, a_n\}}{n} < \log 2$, then for any $x \geq 0$,

    $$ \lim_{n\to\infty} \sqrt{1 + x \sqrt{1 + (x+1) \sqrt{ 1 + \cdots + (x+n-2) \sqrt{1 + (x+n-1) a_n }}}} = x+1. $$




    Proof. Apply Lemmas 1 and 2 to a function $f$ which interpolates $(a_n)$, for instance by piecewise linear interpolation.





    The bound in Lemma 2 and the Corollary is optimal. Indeed, consider the non-example in OP's question of expanding $4$ as in Ramanujan's infinite nested radical. So we define the sequence $a_n$ so as to satisfy

    $$ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{ 1 + \cdots n \sqrt{1 + (n+1) a_n}}}} = 4. $$



    Its first four terms are given as follows.

    $$ a_1 = \frac{15}{2}, \quad
    a_2 = \frac{221}{12}, \quad
    a_3 = \frac{48697}{576}, \quad
    a_4 = \frac{2371066033}{1658880}, \quad \cdots. $$



    Then we can show that $\frac{1}{n}\log \log a_n \to \log 2$ as $n \to \infty$.
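    The listed terms can be checked by unwinding the radical one layer at a time with exact rational arithmetic (my own sketch: setting $t_1 = 4$ and $t_{k+1} = (t_k^2 - 1)/(k+1)$ gives $a_n = t_{n+1}$):

    ```python
    from fractions import Fraction

    # Peel sqrt(1 + 2 sqrt(1 + 3 sqrt(...))) = 4 one layer at a time:
    # t_1 = 4 and t_{k+1} = (t_k^2 - 1) / (k + 1), so that a_n = t_{n+1}.
    t = Fraction(4)
    a = []
    for k in range(1, 5):
        t = (t * t - 1) / (k + 1)
        a.append(t)

    for term in a:
        print(term)
    ```

    Since $t_{k+1} \approx t_k^2$, the terms grow doubly exponentially, which is exactly the borderline growth rate $\frac{1}{n}\log\log a_n \to \log 2$ excluded by the Corollary.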






    share|cite|improve this answer











    $endgroup$


















      12












      $begingroup$

      The user @Eevee Trainer provided a nice explanation on how we define infinite nested radical in terms of limit of finite nested radical which should be insensitive of the starting point. For full generality in this regard, we can consider the convergence of the following finite nested radical



      $$ sqrt{1 + 2 sqrt{1 + 3 sqrt{ 1 + cdots n sqrt{1 + (n+1) a_n}}}} tag{*} $$



      for a given sequence $(a_n)$ of non-negative real numbers. In this answer, I will prove the convergence of $text{(*)}$ to $3$ under a mild condition. My solution will involve some preliminary knowledge on analysis.





      Setting. We consider the map $\Phi$, defined on the space of all functions from $[0,\infty)$ to $[0, \infty)$, which is given by



      $$ \Phi [f](x) = \sqrt{1 + xf(x+1)}. $$



      Let us check how $\Phi$ is related to our problem. The idea is to apply the trick of computing Ramanujan's infinite nested radical not to a single number, but rather to a function. Here, we choose $F(x) = x+1$. Since



      $$ F(x) = 1+x = \sqrt{1+x(x+2)} = \sqrt{1 + xF(x+1)} = \Phi[F](x), $$



      it follows that we can iterate $\Phi$ several times to obtain



      $$ F(x) = \Phi^{\circ n}[F](x) = \sqrt{1 + x\sqrt{1 + (x+1)\sqrt{1 + \cdots (x+n-1)\sqrt{(x+n+1)^2}}}}, $$



      where $\Phi^{\circ n} = \Phi \circ \cdots \circ \Phi$ is the $n$-fold function composition of $\Phi$. Of course, the original radical corresponds to the case $x = 2$. This already proves that $\Phi^{\circ n}[F](x)$ converges to $F(x)$ as $n\to\infty$.



      On the other hand, an infinite nested radical does not come with any designated value to start from, and so the above computation is far from satisfactory when it comes to defining the infinite nested radical. Thus, a form of robustness of the convergence is required. In this regard, @Eevee Trainer investigated the convergence of



      $$\Phi^{\circ n}[1](2) = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \cdots (n+1)\sqrt{1}}}}, $$



      and confirmed numerically that this still tends to $F(2) = 3$ as $n$ grows. Of course, it would be ideal to verify the same conclusion for other choices of starting point, using a rigorous argument.



      Proof. One nice thing about $\Phi$ is that it is monotone, i.e., if $f \leq g$, then $\Phi [f] \leq \Phi [g]$. From this, it is easy to establish the following observations.




      Lemma 1. For any $f \geq 0$, we have $\liminf_{n\to\infty} \Phi^{\circ n}[f](x) \geq x+1$.




      Proof. We inductively apply the monotonicity of $\Phi$ to find that



      $$\Phi^{\circ (n+1)} [0](x) \geq x^{1-2^{-n}} \qquad \text{and} \qquad \Phi^{\circ n}[x](x) \geq x + 1 - 2^{-n}. $$



      So, for any integer $m \geq 0$,



      \begin{align*}
      \liminf_{n\to\infty} \Phi^{\circ n}[f](x)
      &\geq \liminf_{n\to\infty} \Phi^{\circ m}[\Phi^{\circ (n+1)}[0]](x)
      \geq \liminf_{n\to\infty} \Phi^{\circ m}[x^{1-2^{-(n-1)}}](x) \\
      &= \Phi^{\circ m}[x](x)
      \geq x + 1 - 2^{-m},
      \end{align*}



      and letting $m \to \infty$ proves Lemma 1 as required.




      Lemma 2. If $\limsup_{x\to\infty} \frac{\log \log \max\{e, f(x)\}}{x} < \log 2$, then $\limsup_{n\to\infty} \Phi^{\circ n}[f](x) \leq x+1$.




      Proof. By the assumption, there exist $C > 1$ and $\alpha \in [0, 2)$ such that $f(x) \leq C e^{\alpha^x} (x + 1)$. Again applying the monotonicity of $\Phi$, we check that $\Phi^{\circ n}[f](x) \leq C^{2^{-n}} e^{(\alpha/2)^n \alpha^x} (x+1)$ holds. This is certainly true if $n = 0$. Moreover, assuming that this is true for $n$,



      \begin{align*}
      \Phi^{\circ (n+1)}[f](x)
      &\leq \Phi\left[C^{2^{-n}} e^{(\alpha/2)^n \alpha^x} (x+1)\right](x) \\
      &= \left( 1 + x C^{2^{-n}} e^{(\alpha/2)^n \alpha^{x+1}} (x+2) \right)^{1/2} \\
      &\leq C^{2^{-n-1}} e^{(\alpha/2)^{n+1} \alpha^{x}} (x+1).
      \end{align*}



      Now letting $n \to \infty$ proves the desired result.




      Corollary. If $a_n \geq 0$ satisfies $\limsup_{n\to\infty} \frac{\log\log \max\{e, a_n\}}{n} < \log 2$, then for any $x \geq 0$,



      $$ \lim_{n\to\infty} \sqrt{1 + x \sqrt{1 + (x+1) \sqrt{ 1 + \cdots + (x+n-2) \sqrt{1 + (x+n-1) a_n }}}} = x+1. $$




      Proof. Apply Lemmas 1 and 2 to a function $f$ which interpolates $(a_n)$, obtained for instance by piecewise linear interpolation.





      The bound in Lemma 2 and the Corollary is optimal. Indeed, consider the non-example in the OP's question of expanding $4$ as in Ramanujan's infinite nested radical. So we define the sequence $a_n$ so as to satisfy



      $$ \sqrt{1 + 2 \sqrt{1 + 3 \sqrt{ 1 + \cdots n \sqrt{1 + (n+1) a_n}}}} = 4. $$



      Its first four terms are as follows.



      $$ a_1 = \frac{15}{2}, \quad
      a_2 = \frac{221}{12}, \quad
      a_3 = \frac{48697}{576}, \quad
      a_4 = \frac{2371066033}{1658880}, \quad \cdots. $$



      Then we can show that $\frac{1}{n}\log \log a_n \to \log 2$ as $n \to \infty$.
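One can watch this growth rate concretely by back-solving for $a_n$ with exact rational arithmetic. This is our own illustrative sketch (the function name is ours); it peels one radical at a time, using $\sqrt{1 + k u} = t \iff u = (t^2 - 1)/k$:

```python
from fractions import Fraction
import math

def tail_term(n, target=4):
    """Back-solve sqrt(1 + 2*sqrt(1 + 3*sqrt(... n*sqrt(1 + (n+1)*a_n)))) = target for a_n."""
    t = Fraction(target)
    for k in range(2, n + 2):   # peel coefficients 2, 3, ..., n+1
        t = (t * t - 1) / k     # invert sqrt(1 + k*u) = t
    return t

# a_n is a huge rational, so compute log(a_n) from numerator and denominator.
for n in (4, 8, 12):
    a = tail_term(n)
    log_a = math.log(a.numerator) - math.log(a.denominator)
    print(n, math.log(log_a) / n)   # creeps up toward log 2 ~ 0.693
```

The first values reproduce $a_1 = 15/2$ and $a_3 = 48697/576$ above, and the printed ratios increase toward $\log 2$, consistent with the claimed optimality of the bound.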






      answered 19 hours ago (edited 18 hours ago) – Sangchul Lee
            As others have said, the rigorous definition of an infinite expression comes from the limit of a sequence of finite terms. The terms need to be well-defined, but in practice, we just try to make sure the pattern is clear from context.



            Now let's see what goes wrong with your other example. You wrote:



            $$4 = \sqrt{16}=\sqrt{1+2\sqrt{56.25}}=\sqrt{1+2\sqrt{1+3\sqrt{\frac{48841}{144}}}}=\cdots=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+\cdots}}}}$$



            The problem is that each term in the sequence (e.g. if we stop at $4$) fails to include an "extra" amount (and this amount is not going to zero). So if we look at the partial expressions, we see they won't converge to $4$ unless we include the extra amounts we keep pushing to the right. It's the same logical mistake as taking



            \begin{align*}
            2 &= 1 + 1 \\
            &= \frac{1}{2} + \frac{1}{2} + 1 \\
            &= \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + 1 \\
            &= \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + 1
            \end{align*}

            And then saying: wait, the partial sums in the last line converge only to $1$ instead of $2$. But this is a mistake, because we can't push the extra $1$ "infinitely far" to the right. Otherwise, the partial sums we write down will just look like $\frac{1}{2} + \frac{1}{4} + \cdots$ and will never include the $1$. The same thing is happening (roughly) in your example.
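The arithmetic mistake is easy to see numerically (a throwaway illustration of the partial sums above):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...: the "+ 1" that keeps being pushed
# to the right never shows up in any partial sum, so the limit is 1, not 2,
# even though every finite line of the derivation above equals 2.
partials = [sum(0.5 ** k for k in range(1, n + 1)) for n in range(1, 31)]
print(partials[-1])  # very close to 1.0
```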






            • However, the terms in the argument for the value being $3$ are $16, 25, 36, \dots$, also not converging to $0$. – M.Herzkamp, yesterday










            • @M.Herzkamp, I've edited to avoid that example. To look at the extra terms rigorously, you really have to subtract the entire term in the sequence from the total, which doesn't have a simple closed form in the square-root examples. – usul, yesterday


















            answered yesterday (edited yesterday) – usul
            Your example doesn't quite capture the spirit of the Ramanujan expression: it arises from the succession of the integers, not from an ad hoc computation that fits the expression to a value.



            Let's make a sketch of an induction proof. Take the base case $n=3$:
            $$\sqrt{n^2}=3$$
            and, for the inductive step, take the radical at the "deepest" level of the chain and apply



            $$\sqrt{k^2}=\sqrt{1+(k-1)(k+1)}$$
            $$=\sqrt{1+(k-1)\sqrt{(k+1)^2}}$$



            This gives us exactly the sequence we need. It means we can take any $N \geq 3$ and construct the expression with $\sqrt{N^2}$ in the deepest radical, knowing that the equality with $3$ has been preserved. (What does this tell us about the limit at infinity?)



            For the $4$ example, there is no analogous step to get from one term of the sequence to the next; you have to compute the term at the lowest level by working through the whole chain each time.
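The inductive construction can be mirrored in code (a sketch; `ramanujan_finite` is our own name): starting from $\sqrt{N^2}$ at the deepest level and unwinding the step above, every expression in the family evaluates to exactly $3$:

```python
import math

def ramanujan_finite(N):
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(... (N-2)*sqrt(N^2)))) from the inside out."""
    v = float(N)                   # deepest radical: sqrt(N^2) = N
    for c in range(N - 2, 1, -1):  # coefficients N-2, ..., 3, 2, moving outward
        v = math.sqrt(1 + c * v)   # each step lands on an exact integer: sqrt(1 + c*(c+2)) = c+1
    return v

print(ramanujan_finite(3), ramanujan_finite(10), ramanujan_finite(50))  # all 3.0
```

Because every intermediate value is a small perfect square, the result is exactly $3.0$ at every depth, with no drift to chase — unlike the $4$ case, where the innermost seed must be recomputed through the whole chain.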






            • Thanks for your help. However, gathering all the help from the community so far, I would say I disagree with your points. In the case of '4', we can definitely define a sequence of nested radicals (although the general term is super ugly) that is well-defined and converges to 4. The problem is that the expression of the infinite nested radical alone does not tell us all the information; it is never properly defined by itself. – Anson NG, 19 hours ago










            • It is true that your sequence converges to 4 (because that's how it's constructed), but in some sense it doesn't converge to the abstracted sequence in question here. Think about the expression sqrt(1+2*sqrt(1+3*sqrt(1+...))). What is the additive contribution of the "..." part (i.e. the difference between this and sqrt(1+2*sqrt(1+3*sqrt(1))))? We should expect it to approach 0 as we go deeper into the chain if we want to arrive at a limit, but in your derivation you're squaring it at each term of the sequence, which offsets the reduction you'd see from it being inside another square root. – ripkoops, 10 hours ago
















            3












            $begingroup$

            Your example doesn't quite capture the spirit of the Ramanujan expression: it arises from the succession of the integers, not from ad hoc computation to try to fit the expression to a value.



            Let's make a sketch of an induction proof. Take the base case $n=3$:
            $$sqrt{n^2}=3$$
            and, for the inductive step, take the radical at the "deepest" level of the chain and apply



            $$sqrt{k^2}=sqrt{1+(k-1)(k+1)}$$
            $$=sqrt{1+(k-1)sqrt{(k+1)^2}}$$



            this gives us exactly the sequence we need. This means we can take any $N>=3$ and construct the expression with $sqrt{N^2}$ in the deepest radical, and know that the equality with 3 has been preserved. (What does this tell us about the limit at infinity?)



            For the 4 example, there's no analogous step to get from one term of the sequence to the next; you have to compute the term at the lowest level by working through the whole chain each time.






            share|cite|improve this answer








            New contributor




            ripkoops is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
            Check out our Code of Conduct.






            $endgroup$













            • $begingroup$
              Thanks for your help. However, gathering all the helps from the community so far, I would say I disagree with your points. In the case of '4' , we can definitely define a sequence of nested radicals(although the general term is super ugly) that is well-defined and converges to 4. The problem is the expression of the infinite nested radical alone is not telling us all the information. It is never properly defined by itself.
              $endgroup$
              – Anson NG
              19 hours ago










            • $begingroup$
              It true that your sequence converges to 4 (because that's how it's constructed), but in some sense it doesn't converge to the abstracted sequence in question here. Think about the expression sqrt(1+2*sqrt(1+3*sqrt(1+...))). What's the additive contribution of the "..." part (i.e. the difference between this and sqrt(1+2*sqrt(1+3*sqrt(1))))? We should expect it to approach 0 as we go deeper into the chain if we want to arrive at a limit, but in your derivation you're squaring it at each term of the sequence, which offsets the reduction you'd see from it being within another square root.
              $endgroup$
              – ripkoops
              10 hours ago














            3












            3








            3





            $begingroup$

            Your example doesn't quite capture the spirit of the Ramanujan expression: it arises from the succession of the integers, not from ad hoc computation to try to fit the expression to a value.



            Let's make a sketch of an induction proof. Take the base case $n=3$:
            $$sqrt{n^2}=3$$
            and, for the inductive step, take the radical at the "deepest" level of the chain and apply



            $$sqrt{k^2}=sqrt{1+(k-1)(k+1)}$$
            $$=sqrt{1+(k-1)sqrt{(k+1)^2}}$$



            this gives us exactly the sequence we need. This means we can take any $N>=3$ and construct the expression with $sqrt{N^2}$ in the deepest radical, and know that the equality with 3 has been preserved. (What does this tell us about the limit at infinity?)



            For the 4 example, there's no analogous step to get from one term of the sequence to the next; you have to compute the term at the lowest level by working through the whole chain each time.






            share|cite|improve this answer








            New contributor




            ripkoops is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
            Check out our Code of Conduct.






            $endgroup$



            Your example doesn't quite capture the spirit of the Ramanujan expression: it arises from the succession of the integers, not from ad hoc computation to try to fit the expression to a value.



            Let's make a sketch of an induction proof. Take the base case $n=3$:
            $$sqrt{n^2}=3$$
            and, for the inductive step, take the radical at the "deepest" level of the chain and apply



            $$sqrt{k^2}=sqrt{1+(k-1)(k+1)}$$
            $$=sqrt{1+(k-1)sqrt{(k+1)^2}}$$



            this gives us exactly the sequence we need. This means we can take any $N>=3$ and construct the expression with $sqrt{N^2}$ in the deepest radical, and know that the equality with 3 has been preserved. (What does this tell us about the limit at infinity?)



            For the 4 example, there's no analogous step to get from one term of the sequence to the next; you have to compute the term at the lowest level by working through the whole chain each time.







answered 22 hours ago

ripkoops (new contributor)
312
            • $begingroup$
Thanks for your help. However, gathering all the help from the community so far, I would say I disagree with your points. In the case of '4', we can definitely define a sequence of nested radicals (although the general term is super ugly) that is well-defined and converges to 4. The problem is that the expression of the infinite nested radical alone is not telling us all the information. It is never properly defined by itself.
              $endgroup$
              – Anson NG
              19 hours ago










            • $begingroup$
It's true that your sequence converges to 4 (because that's how it's constructed), but in some sense it doesn't converge to the abstracted sequence in question here. Think about the expression $\sqrt{1+2\sqrt{1+3\sqrt{1+\cdots}}}$. What's the additive contribution of the "$\cdots$" part (i.e. the difference between this and $\sqrt{1+2\sqrt{1+3\sqrt{1}}}$)? We should expect it to approach 0 as we go deeper into the chain if we want to arrive at a limit, but in your derivation you're squaring it at each term of the sequence, which offsets the reduction you'd see from it being within another square root.
              $endgroup$
              – ripkoops
              10 hours ago


















            0












$4=\sqrt{16}=\sqrt{1+3\sqrt{25}}=\sqrt{1+3\sqrt{1+4\sqrt{36}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{49}}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{64}}}}}=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{81}}}}}}$

$=\sqrt{1+3\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{1+\cdots}}}}}}}$

$5=\sqrt{25}=\sqrt{1+4\sqrt{36}}=\sqrt{1+4\sqrt{1+5\sqrt{49}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{64}}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{81}}}}}=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{100}}}}}}$

$=\sqrt{1+4\sqrt{1+5\sqrt{1+6\sqrt{1+7\sqrt{1+8\sqrt{1+9\sqrt{1+\cdots}}}}}}}$

$\vdots$

$n=\sqrt{1+(n-1)\sqrt{1+n\sqrt{1+(n+1)\sqrt{1+(n+2)\sqrt{1+(n+3)\sqrt{1+(n+4)\sqrt{1+\cdots}}}}}}}$
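These chains can be spot-checked numerically. The sketch below is my own (the helper name `nested_radical` and the choice of replacing the innermost tail by $1$ are assumptions, one reasonable truncation among several): it builds the radical with outermost coefficient $n-1$ from the inside out, and the result approaches $n$ as the depth grows:

```python
import math

def nested_radical(n, depth):
    """Approximate n = sqrt(1+(n-1)*sqrt(1+n*sqrt(1+(n+1)*...)))
    using `depth` nested levels, with the innermost tail replaced by 1."""
    val = 1.0
    # Coefficients run (n-1), n, (n+1), ... from the outside in,
    # so iterate from the innermost coefficient back out.
    for k in range(n - 1 + depth, n - 2, -1):
        val = math.sqrt(1 + k * val)
    return val

# Approaches 4 and 5 respectively as depth increases.
print(nested_radical(4, 40), nested_radical(5, 40))
```

With $n=3$ this reduces to Ramanujan's original radical; the same telescoping identity $k+1=\sqrt{1+k(k+2)}$ underlies every row of the table above.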






            • 2




              $begingroup$
              This does not answer the question.
              $endgroup$
              – Wojowu
              yesterday






            • 2




              $begingroup$
              @Wojowu: however, you have to admit that it is a nice piece of information, but too much for a comment. A little explanation might be beneficial though...
              $endgroup$
              – M.Herzkamp
              yesterday
















            edited yesterday

























            answered yesterday









Okkes Dulgerci

            1353



