Method to test if a number is a perfect power?


Is there a general method for testing numbers to see if they are perfect $n$th powers?

For example, suppose that I did not know that $121$ was a perfect square. A naive test in code might be to check whether
$$\lfloor\sqrt{121}\rfloor = \sqrt{121}.$$

But I imagine there are much more efficient ways of doing this (especially if I'm working with numbers with many digits).
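On a system with arbitrary-precision integers, the naive test above can at least be made exact. A minimal sketch (the function name is mine), using Python's `math.isqrt` so that no floating-point rounding is involved even for many-digit numbers:

```python
import math

def is_perfect_square(k: int) -> bool:
    """Exact version of the floor(sqrt(k)) == sqrt(k) test."""
    if k < 0:
        return False
    r = math.isqrt(k)   # floor of the exact integer square root
    return r * r == k
```

Because `math.isqrt` returns the floor of the exact square root, the comparison `r * r == k` never suffers the precision loss a floating-point `sqrt` would for large `k`.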










  • (2) One very cheap, necessary condition is that $x^2 \equiv 0, 1 \pmod 4$. – Alex R., Mar 27 at 21:29

  • Are you given numbers $k$ and $n$ and asked to check whether $k$ is an $n$-th power? Or are you given just $k$ and asked to check whether $k$ is a perfect power? – Servaes, Mar 27 at 21:35

  • (1) @Servaes, I was considering the first case, where I know both $k$ and $n$ and am trying to see whether $k = a^n$ for a positive integer $a$. – D.B., Mar 27 at 21:40

  • Wait, @Alex R., looking at your first comment: what about $x^2 = 40 \equiv 0 \pmod 4$? Yet $40$ is not a perfect square. – D.B., Mar 27 at 22:01

  • (5) @D.B.: Hence it's a *necessary* condition: if $y$ is a perfect square, then $y \equiv 0, 1 \pmod 4$. The contrapositive gives: if $y \equiv 2, 3 \pmod 4$, then $y$ cannot be a perfect square. – Alex R., Mar 27 at 22:09
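Alex R.'s cheap necessary condition from the comments can be sketched as a one-line pre-filter (the name is mine); it rejects residues $2$ and $3$ mod $4$ without extracting any root, but, as the exchange above notes, passing it proves nothing:

```python
def may_be_square_mod_4(y: int) -> bool:
    # A perfect square is congruent to 0 or 1 mod 4,
    # so residues 2 and 3 can be rejected immediately.
    return y % 4 in (0, 1)
```

Note that `40 % 4 == 0`, so $40$ passes the filter even though it is not a square; the condition is necessary, not sufficient.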
















number-theory perfect-powers






edited Mar 27 at 21:29 by Chase Ryan Taylor

asked Mar 27 at 21:17 by D.B.
4 Answers

See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



https://cr.yp.to/papers/powers-ams.pdf






answered Mar 27 at 21:37 by Alex J Best












In the specific case where you already know not only the number being checked but also the power, as the OP's comment to Servaes states, you have something like

$$k = a^n \tag{1}$$

where $k$ and $n$ are known integers, and $a$ is an unknown value to check for being an integer. For $n$ not too large, one relatively fast & easy way to show that $a$ is *not* an integer is to get all factors of $n$ and check those which are $1$ less than a prime. In those cases, Fermat's little theorem shows that $k$ must be congruent to $0$ or $1$ modulo this prime. If it's not, then $a$ can't be an integer. This test is especially useful if you are using a fixed value of $n$ and checking many values of $k$.

If you don't do the initial check above, or it passes, you can next use a function to get the $n$th root, such as the "pow" function in C/C++ with a second argument of $\frac{1.0}{n}$, to get something like

$$a = \sqrt[n]{k} \tag{2}$$

As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 \times 10^{11}$, taking their sixth roots (using pow) and then cubing the results, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GHz, 8 GB RAM, 8 MB L2 & L3 cache, and 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. The maximum absolute difference was $8.149\ldots \times 10^{-10}$ and the maximum relative difference was $1.332\ldots \times 10^{-15}$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.

Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation-wise $e$ will likely be at least the fastest one, if not also the most accurate) gives

$$\ln(k) = n\ln(a) \;\Rightarrow\; \ln(a) = \frac{\ln(k)}{n} \;\Rightarrow\; a = e^{\frac{\ln(k)}{n}} \tag{3}$$

As this involves $2$ basic steps, taking a logarithm and then exponentiating, it may take longer & involve a larger cumulative error than using $(2)$ instead.

Either method, on a computer, will give a floating-point value that is, even for large values of $k$, relatively close to the correct value of $a$.

You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show it's not an integer. For example, you can start with the integer part obtained in $(2)$, call it $a_1$, and compute the corresponding power $k_1 = a_1^n$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer amount (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check. Then repeat these steps as many times as needed. In almost all cases, I believe it should take few loops to find the correct value. However, note you should also include checks in case there is no such integer $a$, with this usually being determined when one integer value gives a lower result & the next higher integer gives a higher result (or vice versa).

As for overall speed & accuracy, this depends a lot on the machine, compiler, number types & final algorithm used. I assume the values fit within 64-bit unsigned integers, since most machines don't have native integer types larger than this, larger floating-point values have difficulty representing all integers (e.g., you get situations where $f + 1.0$ is stored as $f$), & packages providing a larger number of integer bits can be fairly slow. As my earlier test results show, the root calculations took an average of less than $10^{-7}$ seconds each, i.e., a few hundred machine instructions at most. The accuracy for a single calculation would be better than the $10^{-10}$ seen across many, likely at least around $10^{-12}$. Assuming the value of $a$ only goes up to about $10^{17}$ (as the maximum value for an unsigned 64-bit integer is about $1.85 \times 10^{19}$), this would give a maximum initial error in $a$ of at most a few thousand. The algorithm I propose to determine the final value should usually reduce the error by at least a factor of $2$ each time, so it shouldn't take more than about $10$ or $20$ iterations. Getting the value of $a^n$ to check can be done relatively quickly by starting with $a$, repeatedly squaring, & then multiplying together the powers needed based on the base-$2$ representation of $n$, so it takes on the order of $\log_2 n$ operations. Overall, it should usually take at most a few thousand machine instructions, for a time of around a microsecond (i.e., $10^{-6}$ seconds) on most computers.
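The Fermat-little-theorem filter described at the start of this answer might be sketched as follows (the function name and the trial-division primality check are mine). For each prime $p$ with $p - 1$ dividing $n$, any $n$th power is $\equiv 0$ or $1 \pmod p$, so any other residue rules out an integer $a$:

```python
def passes_fermat_filter(k: int, n: int) -> bool:
    """Necessary condition for k = a**n with integer a.

    If p is prime and (p - 1) divides n, Fermat's little theorem gives
    a**n = (a**(p-1))**(n // (p-1)) congruent to 0 or 1 (mod p) for
    every integer a, so an n-th power must leave residue 0 or 1 mod p.
    """
    # Primes p with (p - 1) | n; such p can be at most n + 1.
    primes = [p for p in range(2, n + 2)
              if n % (p - 1) == 0 and all(p % q != 0 for q in range(2, p))]
    return all(k % p in (0, 1) for p in primes)
```

For $n = 6$ the relevant primes are $2$, $3$ and $7$, so e.g. $65$ is rejected ($65 \equiv 2 \pmod 7$) while $64 = 2^6$ passes. Returning `False` is a proof that $a$ is not an integer; returning `True` decides nothing.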






    • (1) You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What is the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value? Without calculating all these bounds it is not possible to decide if the algorithm is efficient. – miracle173, Mar 27 at 23:19

    • I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$? – Solomon Ucko, Mar 28 at 1:34

    • (1) @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $\log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer. – John Omielan, Mar 28 at 1:40

    • @miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators & smartphones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable. – John Omielan, Mar 28 at 18:40


















My suggestion on a computer is to run a root finder.

Given a value $y$, one way is to hard-code the first couple of cases and then use an integer-valued binary search starting at $y/2$, which takes a number of steps logarithmic in $y$, and thus linear in the input size (since writing down $y$ takes about $\log y$ bits).

You can also write down the Newton's method recurrence and see whether it converges to an integer or not; this should become clear after the first couple of steps, once the error becomes small enough.
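A sketch of the integer-valued binary search suggested here (names mine). It uses only integer arithmetic, so the final equality test is exact rather than depending on floating-point error:

```python
def int_nth_root(y: int, n: int) -> int:
    """Return floor of the n-th root of y >= 1, by binary search."""
    lo, hi = 1, y
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= y:    # mid is still at or below the true root
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_nth_power(y: int, n: int) -> bool:
    r = int_nth_root(y, n)
    return r ** n == y
```

Each probe costs an exponentiation of a large integer, so the search is not linear in the input size, but it is still fast in practice and never misclassifies due to rounding.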






    • I don't think it's linear, given that you need to square the proposed number at every split. – Alex R., Mar 27 at 21:46

    • @AlexR. You are right. – gt6989b, Mar 28 at 14:36


















It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find integers $a, b \ge 2$ such that $$a^b = n \tag{1}$$ or prove that such numbers don't exist.

We have $$n < 2^k$$ because $n$ is a $k$-bit number, and since $a \ge 2$ implies $2^b \le n$, it follows that $$b < k.$$

We can simply check, for each possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can find $a$ by bisection. The bisection checks $O(\log n) = O(k)$ different values of $a$. Each check is the calculation of $a^b$, which can be done by repeatedly multiplying powers of $a$ by $a$; these powers are smaller than $n$, so we multiply $k$-bit numbers at most $b \,(< k)$ times. A multiplication of two $k$-bit numbers needs $O(k^2)$ time. So a single exponent $b$ needs $O(k^2)$ multiplications of $k$-bit numbers, i.e. $O(k^4)$ time, and trying all $O(k)$ possible exponents gives $O(k^5)$ time overall, which is polynomial in the input size.
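The whole procedure above might be sketched like this (a rough implementation; names and the exact bisection bounds are mine): try every exponent $b$ from $2$ up to the bit length of $n$, and bisect for the base $a$:

```python
def is_perfect_power(n: int) -> bool:
    """Return True if n = a**b for some integers a >= 2, b >= 2."""
    if n < 4:
        return False
    k = n.bit_length()           # so n < 2**k, forcing b < k when a >= 2
    for b in range(2, k + 1):
        # Bisect for the largest a with a**b <= n; since n < 2**k,
        # the true root is below 2**(k // b + 1).
        lo, hi = 2, 1 << (k // b + 1)
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if mid ** b <= n:
                lo = mid
            else:
                hi = mid - 1
        if lo ** b == n:
            return True
    return False
```

Only the final equality `lo ** b == n` matters, so even an exponent whose bisection degenerates (e.g. when $2^b$ already exceeds $n$) is handled correctly.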






      Your Answer





      StackExchange.ifUsing("editor", function ()
      return StackExchange.using("mathjaxEditing", function ()
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      );
      );
      , "mathjax-editing");

      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "69"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













      draft saved

      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3165146%2fmethod-to-test-if-a-number-is-a-perfect-power%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      4 Answers
      4






      active

      oldest

      votes








      4 Answers
      4






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      18












      $begingroup$

      See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



      https://cr.yp.to/papers/powers-ams.pdf






      share|cite|improve this answer









      $endgroup$

















        18












        $begingroup$

        See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



        https://cr.yp.to/papers/powers-ams.pdf






        share|cite|improve this answer









        $endgroup$















          18












          18








          18





          $begingroup$

          See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



          https://cr.yp.to/papers/powers-ams.pdf






          share|cite|improve this answer









          $endgroup$



          See Detecting perfect powers in essentially linear time - Daniel J. Bernstein:



          https://cr.yp.to/papers/powers-ams.pdf







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 27 at 21:37









          Alex J BestAlex J Best

          2,44611227




          2,44611227





















              2












              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag1labeleq1$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. For $n$ not being too large, one relatively fast & easy way to show that $a$ is not an integer is get all factors of $n$ and check those which are $1$ less than a prime. In those cases, Fermat's little theorem shows that $k$ must be congruent to $0$ or $1$ modulo this prime. If it's not, then $a$ can't be an integer. Doing this test is especially useful if you are using a fixed value of $n$ and checking many values of $k$.



              If you don't do the initial check above or it passes, you can next perhaps use a function to get the $n$'th root, such as using the "pow" function in C/C++ with a second argument of $frac1.0n$, to get something like



              $$a = sqrt[n]k tag2labeleq2$$



              As for speed & accuracy issues, I once did a test on all integers from $1$ to $5 times 10^11$ to get their sixth roots (using pow), then cubing them by multiplying together $3$ times, plus another test getting all square roots directly (using sqrt), and then finding the maximum of the absolute & relative differences. Note I used VS 2008 on an AMD FX(tm)-8320 Eight-Core Processor, 3.5 GZ, 8 GB RAM, 8 MB L2 & L3 cache, and 64-bit Windows 7 computer. Square roots took 16403 seconds, while sixth roots then cubing took 37915 seconds. Max. actual difference was $8.149ldots times 10^-10$ and relative difference was $1.332ldots times 10^-15$. This gives an indication of how relatively fast & accurate the library routines are, although results will obviously vary depending on the compiler & machine involved.



              Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that implementation wise $e$ will likely at least be the fastest one, if not also the most accurate) gives



              $$ln(k) = nln(a) ; Rightarrow ; ln(a) = fracln(k)n ; Rightarrow ; a = e^fracln(k)n tag3labeleq3$$



              As this involves $2$ basic steps of taking a logarithm and then exponentiating, this may take longer & involve a larger cumulative error than using eqrefeq2 instead.



              Using either method, on a computer, will give a floating point value that would be, even for large values of $k$, relatively close to the correct value of $a$.



              You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show it's not an integer. For example, you can start with the integer part obtained in eqrefeq2, call it $a_1$, to determine $k_1$. If $k_1$ is not correct, then if it's less than $k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, and call the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer amount (making sure it's at least 1) of $left|frack -k_2k_1 - k_2right|$ to $a_2$ to get a new $a_1$ value to check. Then repeat these steps as many times as needed. In almost all cases, I believe it should take few loops to find the correct value. However, note you should also include checks in case there is no such integer $a$, with this usually being determined when one integer value gives a lower result & the next higher gives a higher result (or higher result & next lower integer gives a lower result).



              As for overall speed & accuracy type issues, this depends a lot on aspects like the machine, compiler, number types & final algorithm used. I assume the values fit within 64-bit unsigned integers since most machines don't have native integer types larger than this, larger floating-point values have difficulties with representing all integers (e.g., you get situations where $f + 1.0$ is stored as $f$) & packages providing a larger # of integer bits can be fairly slow. As my earlier test results show, the root calculations took an average of less than $10^-7$ seconds each, i.e., a few hundred machine instructions at the most. The accuracy for a single calculation would be better than the $10^-10$ for multiple ones, with it likely being at least around $10^-12$. Assuming the value of $a$ only goes up to about $10^17$ (as the max. value for an unsigned integer is about $1.85 times 10^19$), this would give a maximum initial error of $a$ being at the most in the thousands. The algorithm I propose to determine the final value should usually reduce the error by at least a factor of $2$ each time, so it shouldn't take more than about $10$ or $20$ iterations. Getting the value of $a^n$ to check can be done relatively quickly by starting with $a$, repeatedly squaring the result & then multiplying together the ones needed based on the base $2$ representation of $n$, so it will take on the order of $log_2 n$ operations. Overall, it should usually take at most a few thousand machine instructions, for a time of around a micro-second (i.e., $10^-6$ seconds) on most computers.






              share|cite|improve this answer











              $endgroup$








              • 1




                $begingroup$
                you skip important steps of your algorithm. How do you calculate $a = e^fracln(k)n$. What is the time and space complexity of this calculation? How big is the difference of the exact value of $e^fracln(k)n$ and the calculated value of $e^fracln(k)n$? Without calculating all this bounds it is not possible to decide if the algorithm is efficient.
                $endgroup$
                – miracle173
                Mar 27 at 23:19










              • $begingroup$
                I'm not very familiar with how these are implemented, but wouldn't $log_2$ be the most efficient $log$?
                $endgroup$
                – Solomon Ucko
                Mar 28 at 1:34







              • 1




                $begingroup$
                @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                Mar 28 at 1:40











              • $begingroup$
                @miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators & smart phones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable.
                $endgroup$
                – John Omielan
                Mar 28 at 18:40















              2












              $begingroup$

              In the specific case where you already know not only the number being checked but also the power, as the question's comment by the OP to Servaes states, then you have something like



              $$k = a^n tag1labeleq1$$



              where $k$ and $n$ are known integers, but with $a$ being an unknown value to check whether or not it's an integer. For $n$ not being too large, one relatively fast & easy way to show that $a$ is not an integer is get all factors of $n$ and check those which are $1$ less than a prime. In those cases, Fermat's little theorem shows that $k$ must be congruent to $0$ or $1$ modulo this prime. If it's not, then $a$ can't be an integer. Doing this test is especially useful if you are using a fixed value of $n$ and checking many values of $k$.



              If you skip the initial check above, or it passes, you can next use a function to get the $n$'th root, such as the "pow" function in C/C++ with a second argument of $1.0/n$, to get something like



              $$a = \sqrt[n]{k} \tag{2}\label{eq2}$$



              As for speed & accuracy, I once ran a test on all integers from $1$ to $5 \times 10^{11}$: I took their sixth roots (using pow) and then cubed those roots, plus a second test taking all square roots directly (using sqrt), and then found the maximum absolute & relative differences. I used VS 2008 on a 64-bit Windows 7 machine with an AMD FX(tm)-8320 eight-core processor at 3.5 GHz, 8 GB RAM, and 8 MB L2 & L3 cache. Square roots took 16403 seconds, while sixth roots plus cubing took 37915 seconds. The maximum absolute difference was $8.149\ldots \times 10^{-10}$ and the maximum relative difference was $1.332\ldots \times 10^{-15}$. This gives an indication of how fast & accurate the library routines are, although results will obviously vary with the compiler & machine involved.



              Alternatively, taking natural logarithms of both sides (you could use any base, but I suspect that, implementation-wise, $e$ will likely be the fastest, if not also the most accurate) gives



              $$\ln(k) = n\ln(a) \;\Rightarrow\; \ln(a) = \frac{\ln(k)}{n} \;\Rightarrow\; a = e^{\frac{\ln(k)}{n}} \tag{3}\label{eq3}$$



              As this involves $2$ basic steps, taking a logarithm and then exponentiating, it may take longer & accumulate a larger error than using $(2)$ instead.



              Either method, on a computer, will give a floating-point value that, even for large values of $k$, is relatively close to the correct value of $a$.



              You can now use any number of algorithms to relatively quickly & easily determine $a$ if it's an integer, or show that it's not. For example, start with the integer part obtained from $(2)$, call it $a_1$, and compute $k_1 = a_1^n$. If $k_1 \ne k$, then if $k_1 < k$, check $a_2 = a_1 + 1$, else check $a_2 = a_1 - 1$, calling the new result $k_2$. If $k_2$ is still not correct, add or subtract the integer part (making sure it's at least $1$) of $\left|\frac{k - k_2}{k_1 - k_2}\right|$ to $a_2$ to get a new $a_1$ value to check, and repeat these steps as many times as needed. In almost all cases, I believe it should take only a few loops to find the correct value. However, you should also include checks for the case where no such integer $a$ exists; this is usually detected when one integer value gives a result below $k$ and the next higher integer gives a result above it (or vice versa).
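              A minimal Python sketch of this idea (simplified: for moderate $k$ the floating-point estimate from $(2)$ is off by only a little, so checking a small window of candidate integers stands in for the full secant-style correction described above; for values near the top of the 64-bit range you would widen the window or use the iterative correction):

```python
def integer_nth_root(k, n):
    """Return a if a**n == k for some positive integer a, else None.

    Starts from the floating-point n-th root and checks a few
    neighbouring integers to absorb rounding error.
    """
    a = round(k ** (1.0 / n))
    for cand in range(max(1, a - 2), a + 3):
        if cand ** n == k:
            return cand
    return None
```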



              As for overall speed & accuracy, this depends a lot on the machine, compiler, number types & final algorithm used. I assume the values fit within 64-bit unsigned integers, since most machines don't have native integer types larger than this, larger floating-point values have trouble representing all integers (e.g., you get situations where $f + 1.0$ is stored as $f$), and packages providing more integer bits can be fairly slow. As my earlier test results show, the root calculations took an average of less than $10^{-7}$ seconds each, i.e., a few hundred machine instructions at most. The accuracy of a single calculation would be better than the $10^{-10}$ seen across multiple ones, likely at least around $10^{-12}$. Assuming the value of $a$ only goes up to about $10^{17}$ (the maximum value of a 64-bit unsigned integer is about $1.85 \times 10^{19}$), this gives a maximum initial error in $a$ of at most a few thousand. The algorithm I propose to determine the final value should usually reduce the error by at least a factor of $2$ each time, so it shouldn't take more than about $10$ or $20$ iterations. The value of $a^n$ to check can be computed relatively quickly by repeatedly squaring and multiplying together the powers needed based on the base-$2$ representation of $n$, which takes on the order of $\log_2 n$ operations. Overall, it should usually take at most a few thousand machine instructions, for a time of around a microsecond ($10^{-6}$ seconds) on most computers.
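              The repeated-squaring computation of $a^n$ mentioned above can be sketched as follows (Python for brevity; its integers are arbitrary precision, so no overflow handling is shown):

```python
def power(a, n):
    """Compute a**n using O(log2 n) multiplications (binary exponentiation)."""
    result = 1
    while n > 0:
        if n & 1:          # this bit of n is set: fold in the current power
            result *= a
        a *= a             # square for the next bit
        n >>= 1
    return result
```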

















              $endgroup$








              • 1




                $begingroup$
                You skip important steps of your algorithm. How do you calculate $a = e^{\frac{\ln(k)}{n}}$? What is the time and space complexity of this calculation? How big is the difference between the exact value of $e^{\frac{\ln(k)}{n}}$ and the calculated value? Without calculating all these bounds it is not possible to decide if the algorithm is efficient.
                $endgroup$
                – miracle173
                Mar 27 at 23:19










              • $begingroup$
                I'm not very familiar with how these are implemented, but wouldn't $\log_2$ be the most efficient $\log$?
                $endgroup$
                – Solomon Ucko
                Mar 28 at 1:34







              • 1




                $begingroup$
                @SolomonUcko It depends on the internal implementation, but note that although everything in a computer is basically base $2$, this doesn't help with many non-linear operations like $log$. However, due to certain natural logarithm and exponential properties, there are relatively fast & accurate implementations to determine their values, e.g., even just using a Taylor series expansion, that you don't have with other bases, such as $2$. In fact, I suspect many implementations would first determine the result in base $e$ & then convert to base $2$ before returning an answer.
                $endgroup$
                – John Omielan
                Mar 28 at 1:40











              • $begingroup$
                @miracle173 I don't believe "space" (I assume you mean memory) complexity will generally be an issue on even basic calculators & smart phones. As for the time complexity, it varies a lot depending on multiple factors, but I've tried to give at least rough bounds that I believe are fairly reasonable.
                $endgroup$
                – John Omielan
                Mar 28 at 18:40













              edited Mar 28 at 18:35

























              answered Mar 27 at 22:42









              John Omielan








              0












              $begingroup$

              My suggestion on a computer is to run a root finder.



              Given a value $y$, one way is to hard-code the first couple of cases and then use an integer-valued binary search starting at $y/2$; the number of iterations is logarithmic in $y$, and thus linear in the input size (since the input itself takes about $\log y$ bits).



              You can also write down the Newton's method recurrence and see whether or not it converges to an integer; this should become clear after the first couple of steps, once the error becomes small enough.
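              The integer-valued binary search can be sketched in Python as follows (the function name is mine; it checks whether $y$ has an integer $n$-th root). Note that, as the comment below observes, each step costs an exponentiation of large numbers, so the total work is more than linear in the input size:

```python
def bisect_nth_root(y, n):
    """Binary search for an integer a with a**n == y; None if none exists."""
    lo, hi = 1, y
    while lo <= hi:
        mid = (lo + hi) // 2
        m = mid ** n
        if m == y:
            return mid
        if m < y:
            lo = mid + 1   # mid**n too small: root lies above
        else:
            hi = mid - 1   # mid**n too large: root lies below
    return None
```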















              $endgroup$












              • $begingroup$
                I don't think it's linear, given that you need to square the proposed number at every split.
                $endgroup$
                – Alex R.
                Mar 27 at 21:46











              • $begingroup$
                @AlexR. you are right
                $endgroup$
                – gt6989b
                Mar 28 at 14:36

























              answered Mar 27 at 21:40









              gt6989b













              0












              $begingroup$

              It is at least possible to do this in polynomial time. Assume $n$ is a $k$-bit number and you want to find positive integers $a$ and $b$ such that $$a^b = n \tag{1}$$ or prove that no such numbers exist.



              We have $$n < 2^k$$ because $n$ is a $k$-bit number, and so $$b < k.$$



              We can simply check, for each possible $b$, whether there is an $a$ such that $(1)$ holds. For a given $b$ we can try to find $a$ by bisection. The bisection checks $O(\log n) = O(k)$ different values of $a$. Each check is a calculation of $a^b$, which can be done by repeatedly multiplying powers of $a$ by $a$; these powers are smaller than $n$, so we multiply $k$-bit numbers at most $b \,(< k)$ times. A multiplication of two $k$-bit numbers takes $O(k^2)$ time. So a given $b$ needs $O(k^2)$ multiplications of $k$-bit numbers, and over all $O(k)$ exponents $b$ the algorithm needs $O(k^3)$ multiplications, which means $O(k^5)$ time, polynomial in the input size $k$.
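              The whole procedure can be sketched in Python (a straightforward, unoptimized rendering of the argument above, restricted to exponents $b \ge 2$ since $b = 1$ is trivial; the function name is mine):

```python
def perfect_power(n):
    """Return (a, b) with a**b == n and b >= 2, or None.

    Tries every exponent b up to the bit length k of n, using
    bisection to look for an integer b-th root.
    """
    k = n.bit_length()
    for b in range(2, k + 1):
        # a < 2**(k//b + 1), since a**b = n < 2**k
        lo, hi = 1, 1 << (k // b + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            m = mid ** b
            if m == n:
                return (mid, b)
            if m < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return None
```

              Since the $b$ loop runs smallest exponent first, the returned pair uses the smallest valid $b$.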















              $endgroup$



















                  answered Mar 27 at 22:52









miracle173



























