I want to use Tensorflow on CPU for everything except back propagation
I recently built my first TensorFlow model (converted from hand-coded Python). I'm using tensorflow-gpu, but I only want to use the GPU for backprop during training; for everything else I want to use the CPU. I've seen this article showing how to force CPU use on a system that uses the GPU by default, but that approach requires specifying every single operation where you want to force CPU use. I'd like to do the opposite: default to the CPU, and specify the GPU only for the backprop I do during training. Is there a way to do that?
Update
It looks like things are just going to run slower under TensorFlow because of how my model and scenario are built at present. I tried a different environment that uses regular (non-GPU) TensorFlow, and it still runs significantly slower than hand-coded Python. The reason, I suspect, is that this is a reinforcement learning model that plays checkers (see below) and makes a single forward-prop "prediction" at a time as it plays against a computer opponent. That made sense when I designed the architecture, but it's not very efficient to do predictions one at a time, and less so with whatever overhead TensorFlow adds.
So, now I'm thinking that I'm going to need to change the game playing architecture to play, say, a thousand games simultaneously and run a thousand forward prop moves in a batch. But, man, changing the architecture now is going to be tricky at best.
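A batched rollout along those lines might look something like this. This is only a NumPy sketch: `forward` is a stand-in for the network's forward pass, and the shapes (32 inputs, 4 outputs, 1000 games) are made-up illustration values, not from the actual model.

```python
import numpy as np

# Sketch: batch the board states from many simultaneous games into one
# array, so each "turn" costs a single batched forward pass instead of
# one tiny forward pass per game.

rng = np.random.default_rng(0)
w = rng.standard_normal((32, 4))  # stand-in for the network's weights


def forward(batch):
    # stand-in for the model's forward pass (the real one would be TF)
    return batch @ w


n_games = 1000
boards = rng.standard_normal((n_games, 32))  # one board encoding per game

moves = forward(boards)  # one batched call instead of 1000 single calls
print(moves.shape)  # (1000, 4)
```

The per-call overhead (Python dispatch, and on GPU the host-to-device copies) is then amortized over the whole batch rather than paid once per move.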
python tensorflow
Is there a reason you want to do this? If you want forward-passes to be CPU only for a user (non-developer), then just enable CPU when the user is using it.
– Mateen Ulhaq
Nov 10 at 2:29
Here's the reason. This model is being used for reinforcement learning. Specifically, it plays checkers. When playing a game (against another computer player), forward prop is used to make single moves (i.e. single predictions). Thus, for every move, the game itself (built with Python and NumPy) has to provide the NumPy input array, which then has to be copied over to the GPU for forward prop, then copied back to a NumPy array for the game. Those copies on every move are very expensive and in fact make the actual game play far slower than hand-coded Python on the CPU.
– tobogranyte
Nov 10 at 12:26
edited Nov 10 at 18:14
asked Nov 10 at 2:19
tobogranyte
519417
1 Answer
TensorFlow lets you control device placement with the tf.device context manager. For example, to run some code on the CPU:

with tf.device('cpu:0'):
    <your code goes here>

Use tf.device('gpu:0') similarly to force GPU usage.

Instead of always running your forward pass on the CPU, though, you're better off building two graphs: a forward-only, CPU-only graph used when rolling out the policy, and a GPU-only forward-and-backward graph used when training.
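A minimal sketch of that placement, using the TF 1.x-style graph API the answer describes (accessed through tf.compat.v1 so it also runs under TF 2.x). The layer sizes here are hypothetical, and device pinning at graph-construction time doesn't require the GPU to exist; at session time you'd want allow_soft_placement=True so the train op falls back to CPU on a GPU-less machine.

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use the graph API the answer describes

graph = tf1.Graph()
with graph.as_default():
    x = tf1.placeholder(tf.float32, [None, 32])
    with tf.device('/cpu:0'):
        w = tf1.get_variable('w', [32, 4])
        logits = tf.matmul(x, w)  # forward pass pinned to the CPU
    with tf.device('/gpu:0'):
        # loss and training op pinned to the GPU
        loss = tf.reduce_mean(tf.square(logits))
        train_op = tf1.train.GradientDescentOptimizer(0.1).minimize(loss)

print(logits.op.device)  # placed on the CPU
print(loss.op.device)    # placed on the GPU
```

Wrapping only the training ops in the GPU scope gives the "default to CPU, GPU just for training" split the question asks for, without annotating every forward-pass op individually.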
answered Nov 15 at 17:57
Alexandre Passos
4,0591917