Setting up a Kubernetes worker node behind NAT

I have set up a Kubernetes cluster using kubeadm.



Environment



  1. Master node installed on a PC with a public IP.

  2. Worker node behind NAT (its interface has a local internal IP, but it must be reached from outside via its public IP).

Status



The worker node is able to join the cluster, and when I run

kubectl get nodes

the status of the node is Ready.



Kubernetes can deploy and run pods on that node.



Problem



The problem is that I'm not able to access the pods deployed on that node. For example, if I run



kubectl logs <pod-name>


where <pod-name> is the name of a pod deployed on the worker node, I get this error:



Error from server: Get https://192.168.0.17:10250/containerLogs/default/stage-bbcf4f47f-gtvrd/stage: dial tcp 192.168.0.17:10250: i/o timeout


because it is trying to use the local IP 192.168.0.17, which is not accessible externally.
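
To see which addresses the cluster has recorded for the node (and therefore which IP the API server will dial), something like the following should work; <node-name> is a placeholder:

# Show the INTERNAL-IP / EXTERNAL-IP columns for every node
kubectl get nodes -o wide

# Or dump the full address list for one node
kubectl get node <node-name> -o jsonpath='{.status.addresses}'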



I noticed that the node has this annotation:



flannel.alpha.coreos.com/public-ip: 192.168.0.17


So I tried to modify the annotation, setting the external IP, like this:



flannel.alpha.coreos.com/public-ip: <my_external_ip>


and I see that the node is correctly annotated, but it is still using 192.168.0.17.
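
For reference, the annotation can be set from the command line; a hedged sketch (node name and IP are placeholders), though as noted it did not change the address kubectl logs uses:

# Overwrite the flannel public-ip annotation on the node
kubectl annotate node <node-name> \
  flannel.alpha.coreos.com/public-ip=<my_external_ip> --overwrite

# Verify the annotation took
kubectl describe node <node-name> | grep flannel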



Is there something else that I have to set up on the worker node or in the cluster configuration?










kubernetes

asked Nov 12 '18 at 13:52 by Davide


1 Answer
There were a metric boatload of related questions in the sidebar, and I'm about 90% certain this is a FAQ, but I can't be bothered to triage the duplicate.




"Is there something else that I have to set up on the worker node or in the cluster configuration?"




No, that situation is not a misconfiguration of your worker Node, nor of your cluster configuration. It is just a side-effect of the way Kubernetes handles Pod-centric traffic. It does mean that if you choose to go forward with that setup, you will not be able to use kubectl exec or kubectl logs (and, I think, port-forward too), since those commands require a direct connection to the kubelet port (10250) on the Node which hosts the Pod you are interacting with; as your error message shows, it is the API server that dials the Node's reported InternalIP, 192.168.0.17, and times out. Serving that data straight from the kubelet that has it avoids relaying bulk traffic elsewhere, but it can also become a scaling issue if you have a sufficiently large number of exec/logs/port-forward commands happening simultaneously, since TCP ports are not infinite.
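
You can confirm that this is the failing hop by testing, from the control-plane host, whether the kubelet port is reachable at the address in the error message (a quick diagnostic sketch):

# From the master: can we open a TCP connection to the node's kubelet?
nc -vz -w 5 192.168.0.17 10250
# Expected here: a timeout, matching the "dial tcp ... i/o timeout" error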



I think it is theoretically possible to have your workstation join the overlay network, since by definition the overlay is independent of the outer network, but I don't have a ton of experience with getting an overlay to play nicely with NAT, so that's the "theoretically" part.



I have personally gotten WireGuard to work across NAT, meaning you could VPN into your Node's network, but it took some gear-turning and is likely more trouble than it's worth.
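
For the curious, here is a minimal sketch of the kind of WireGuard setup that implies; every key, address, and port below is a placeholder, not a value from this question. Because the worker is the NAT'd side, it initiates the tunnel toward the master's public IP and keeps the mapping alive:

# /etc/wireguard/wg0.conf on the NAT'd worker node (all values are placeholders)
[Interface]
PrivateKey = <worker-private-key>
Address = 10.99.0.2/24

[Peer]
PublicKey = <master-public-key>
Endpoint = <master-public-ip>:51820   # the master is directly reachable
AllowedIPs = 10.99.0.0/24
PersistentKeepalive = 25              # keeps the NAT mapping open from inside

The master side would carry a matching [Peer] with AllowedIPs = 10.99.0.2/32 and no Endpoint (it learns the worker's address from the keepalives). Brought up with wg-quick up wg0 on both ends, the master could then reach the kubelet over the tunnel address instead of the unreachable LAN IP.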






answered Nov 13 '18 at 3:34 by Matthew L Daniel