How to allow access to a Compute Engine VM from Airflow (Google Cloud Composer)
I am trying to run a bash command of the form ssh user@host "my bash command" using BashOperator in Airflow. This works locally because my public key is on the target machine.
I would like to run the same command in Google Cloud Composer, which is Airflow + Google Kubernetes Engine. I understand that Airflow's core components run in 3 pods named after the pattern airflow-worker-xxxxxxxxx-yyyyy.
A naive solution was to create an SSH key pair for each pod and add each public key to the target machine in Compute Engine. That worked until today: somehow my 3 pods were replaced, so my SSH keys are gone. It was clearly not the best solution.
I have 2 questions:
- Why did Google Cloud Composer replace my pods?
- How can I resolve my issue?
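For reference, a minimal sketch of the setup described above; the DAG id, start date, and user@host are placeholders, not taken from the original post.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# Hypothetical DAG: runs a command on a remote Compute Engine VM over ssh.
# This works locally when the worker's public key is on the target machine.
with DAG(dag_id="remote_ssh_example",
         start_date=datetime(2018, 11, 1),
         schedule_interval=None) as dag:
    run_remote = BashOperator(
        task_id="run_remote_command",
        bash_command='ssh user@host "my bash command"',
    )
```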
airflow google-kubernetes-engine google-cloud-composer
edited Nov 14 '18 at 9:52
Ismail Addou
asked Nov 14 '18 at 9:45
1 Answer
Pod restarts are not specific to Composer; this behavior comes from Kubernetes itself:
Pods aren't intended to be treated as durable entities.
So pods can be restarted for various reasons, and you shouldn't rely on any changes you make inside them.
How can I resolve my issue?
You can solve this by taking advantage of the fact that Cloud Composer creates a Cloud Storage bucket and links it to your environment. You can access the folders of this bucket from any of your workers, so you can store your key (a single key pair is enough) in gs://bucket-name/data, which is accessible through the mapped directory /home/airflow/gcs/data (see the Cloud Composer documentation on data stored in Cloud Storage).
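A hypothetical sketch of what that bash_command could look like, with the key read straight from the mapped directory; the key filename and user@host are placeholders, not from the answer.

```python
# The /data folder of the environment's bucket is mounted on every worker,
# so a key uploaded to gs://<bucket-name>/data/id_rsa appears locally:
key = "/home/airflow/gcs/data/id_rsa"
bash_command = 'ssh -i %s user@host "my bash command"' % key
print(bash_command)
```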
The mapped directory uses gcsfuse to mount the remote bucket into a folder, but I cannot change the SSH key's permissions to 700. I think gcsfuse doesn't allow this operation; is there a way to do so?
– Ismail Addou
Nov 14 '18 at 14:13
You could add some more complexity to your bash_command: copy the file(s) locally, then change the permissions as needed.
– VictorGGl
Nov 15 '18 at 17:02
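A hypothetical sketch of that workaround: since chmod does not work on the gcsfuse mount, copy the key to local disk first and tighten permissions there. Filenames and user@host are placeholders.

```python
# Copy the key out of the gcsfuse mount, restrict it to 600 (ssh refuses
# keys with open permissions), then use it for the remote command.
bash_command = (
    "cp /home/airflow/gcs/data/id_rsa /tmp/id_rsa && "
    "chmod 600 /tmp/id_rsa && "
    'ssh -i /tmp/id_rsa user@host "my bash command"'
)
print(bash_command)
```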
edited Nov 14 '18 at 12:28
answered Nov 14 '18 at 12:01
VictorGGl