Why does Tensorflow 1.11.0 return CUDA_ERROR_NOT_SUPPORTED?










My machine runs Ubuntu 18.04.1 LTS, and CUDA has been installed successfully. The output of $ nvcc --version is:



nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176


I have two Tesla K80 GPUs, and the command nvidia-smi shows:



[screenshot: output of nvidia-smi]



I also tested with ./deviceQuery from NVIDIA_CUDA-9.0_Samples, and its output is as follows:



CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

...

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 9.0, NumDevs = 2
Result = PASS


However, after installing TensorFlow GPU version 1.11.0 from pip, I could not open a TensorFlow session.



>>> import tensorflow as tf
>>> sess = tf.Session()


and it outputs:



2018-11-15 00:13:46.593039: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/quoctin.phan/tools/anaconda/envs/tensorflow_1.11/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1511, in __init__
super(Session, self).__init__(target, graph, config=config)
File "/home/quoctin.phan/tools/anaconda/envs/tensorflow_1.11/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 634, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_NOT_SUPPORTED: operation not supported


I have also tried reinstalling with TensorFlow 1.12.0, but nothing changes. Your help is appreciated.
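For completeness, here is a minimal check that avoids building any graph (a sketch only, assuming the TensorFlow 1.x API; device enumeration alone triggers CUDA initialization, so it should hit the same error if driver initialization is the problem):

# Sketch: minimal GPU-visibility check for TensorFlow 1.x (not part of the
# original question; run inside the same conda environment).
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                 # expect 1.11.0 or 1.12.0
print(tf.test.is_built_with_cuda())   # True for the tensorflow-gpu wheel

# Listing local devices initializes CUDA, so this either shows the two K80s
# or raises the same CUDA_ERROR_NOT_SUPPORTED as tf.Session().
print(device_lib.list_local_devices())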










Tags: python, tensorflow, ubuntu-18.04, tesla






asked Nov 15 '18 at 0:19 by Quoc Tin Phan (edited Nov 15 '18 at 8:44)
1 Answer






Do you think your problem might be connected to Compute Capability? The problem is described here.



You can check it when you run deviceQuery.exe. Here is a thread about where to find it in the Windows distribution of the CUDA package.
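If it helps, the compute capability can also be read programmatically. A minimal sketch using PyCUDA (assuming the pycuda package is installed; it talks to the CUDA driver directly, independently of TensorFlow):

# Sketch: query the compute capability of each GPU through the CUDA driver API.
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print("GPU %d: %s, compute capability %d.%d" % (i, dev.name(), major, minor))

For what it's worth, the Tesla K80 reports compute capability 3.7, which the stock TensorFlow 1.11 GPU wheel supports, so in this case the capability itself is probably not the culprit.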






answered Dec 15 '18 at 21:58 by Renard Korzeniowski
• I was able to solve it. The problem was a mismatch between the NVIDIA-SMI version and the Driver Version (here). Thank you! – Quoc Tin Phan, Jan 2 at 16:31
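A quick way to spot that kind of mismatch is to print the driver and toolkit versions side by side. A sketch (assuming nvidia-smi and nvcc are on the PATH; mismatched driver components, e.g. user-space tools versus the loaded kernel module, can make CUDA context creation fail):

import subprocess

# Driver version as reported by the running driver (one line per GPU).
driver = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]
).decode().strip()

# Toolkit version as reported by nvcc (last line of its banner).
toolkit = subprocess.check_output(["nvcc", "--version"]).decode().strip().splitlines()[-1]

print("NVIDIA driver version:", driver)
print("CUDA toolkit (nvcc):", toolkit)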










