Kubernetes: customizing Pod scheduling and Volume scheduling
I'm trying to use Kubernetes to manage a scenario where I need to run several instances of an application (that is, several Pods). These are my requirements:



  1. When I need to scale up my application, I want to deploy one single Pod on a specific Node (not a random one).

  2. When I need to scale down my application, I want to remove a specific Pod from a specific Node (not a random one).

  3. When a new Pod is deployed, I want it to mount a specific PersistentVolume (not a random one) that I have manually provisioned.

  4. After a Pod has been deleted, I want its PersistentVolume to be re-usable by a different Pod.

So far, I've used this naive solution to do all of the above: every time I needed to create a new instance of my application, I created one new Deployment (with exactly one replica) and one PersistentVolumeClaim. So, for example, if I need five instances of my application, I need five Deployments. However, this solution is not very scalable and it doesn't exploit the full potential of Kubernetes.



I think it would be a lot smarter to use one single template for all the Pods, but I'm not sure whether I should use a Deployment or a StatefulSet.



I've been experimenting with Labels and Node Affinity, and I found out that I can satisfy requirement 1, but I cannot satisfy requirement 2 this way. In order to satisfy requirement 2, would it be possible to delete a specific Pod by writing my own custom scheduler?
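
For reference, this is roughly the manifest I've been experimenting with for requirement 1 (all names and labels below, like myapp and node-1, are placeholders from my own setup):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-instance-1   # one Deployment with one replica per instance (my naive approach)
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
          instance: instance-1
      template:
        metadata:
          labels:
            app: myapp
            instance: instance-1
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:  # enforced at schedule time only
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname   # pin the Pod to one specific Node
                    operator: In
                    values:
                    - node-1
          containers:
          - name: myapp
            image: myapp:latest

The field name itself (...IgnoredDuringExecution) hints at my problem with requirement 2: the rule is only evaluated when the Pod is scheduled, not while it is running.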



I don't understand how Kubernetes decides to tie a specific PersistentVolume to a specific PersistentVolumeClaim. Is there a sort of volume scheduler? Can I customize it somehow? This way, every time a new Pod is created, I'd be able to tie it to a specific volume.










Tags: kubernetes, scheduling, volume, persistent

asked Nov 11 '18 at 21:59 – MikiTesi

1 Answer














There may be a good reason for these requirements, so I'm not going to try to convince you that it may not be a good idea to use Kubernetes for this...



Yes - with nodeSelector using labels, node affinity, and anti-affinity rules, Pods can be scheduled on "appropriate" nodes.
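
For instance, the simplest variant is a plain nodeSelector in the Pod template, keyed on the node's hostname label (node-1 is just an example name):

    # goes in spec.template.spec of the Deployment's Pod template;
    # kubernetes.io/hostname is a standard label the kubelet sets on every Node
    nodeSelector:
      kubernetes.io/hostname: node-1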



Static Pods may be something close to what you are looking for. I've never used static pods/bare pods on Kubernetes... they kind of don't (to quote something from the question) "...exploit the full potential of Kubernetes" ;-)
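
If you do try static Pods, the usual mechanism (assuming a kubeadm-style setup, where the kubelet's staticPodPath is /etc/kubernetes/manifests) is to drop a plain Pod manifest into that directory on the target node - a minimal sketch:

    # placed on the target node itself, e.g. /etc/kubernetes/manifests/myapp.yaml;
    # the kubelet runs it directly, and the API server only shows a read-only mirror Pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-static   # placeholder name
    spec:
      containers:
      - name: myapp
        image: myapp:latest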



Otherwise, here is what I think will work with out-of-the-box constructs for the four requirements:



Use one Deployment (with exactly one replica) per instance, like you have - this will give you requirements #1 and #2. I don't believe requirement #2 (nor #1, actually) can be satisfied with a StatefulSet, nor with a ReplicaSet.
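
Scaling a specific instance down is then just a matter of deleting that instance's Deployment, e.g. (with a hypothetical per-instance name):

    # removes exactly that Pod (and only that one), satisfying requirement #2
    kubectl delete deployment myapp-instance-1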



Use statically provisioned PVs and selector(s) to (quote) "...tie a specific PersistentVolume to a specific PersistentVolumeClaim" for requirement #3.
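
A minimal sketch of the pairing (all names and the pv-for label are made up; swap hostPath for your real storage backend):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-instance-1
      labels:
        pv-for: instance-1              # arbitrary label, used only for matching
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain   # keep the data when the claim goes away
      hostPath:                         # placeholder backend for this sketch
        path: /data/instance-1
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-instance-1
    spec:
      storageClassName: ""              # empty string opts out of dynamic provisioning
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      selector:
        matchLabels:
          pv-for: instance-1            # bind only to the PV carrying this label

The control-plane component doing this matching is the PersistentVolume controller: it binds each pending PVC to a PV that satisfies the claim's size, access modes, storage class and (if present) selector - that's the "volume scheduler" the question asks about.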



Then requirement #4 will be possible - just make sure the PVs use the proper reclaim policy.
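
One caveat I believe applies here (worth verifying for your Kubernetes version): with Retain, a PV whose claim is deleted moves to the Released phase and will not bind to a new PVC until its stale spec.claimRef is cleared, e.g.:

    # pv-instance-1 is the hypothetical PV name from the sketch above
    kubectl patch pv pv-instance-1 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

With the Delete policy the underlying storage would be removed instead, which would defeat requirement #4.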






answered Nov 12 '18 at 2:05 – apisim




















• By using a Deployment (like I'm trying to do now), I can satisfy #1 by using Labels and Node Affinity, but these are only considered at schedule time, not at execution time. This means that if I delete a label from a specific Node, the Pod running on that Node will not be deleted (failing to satisfy #2). Or am I missing something here? I have already tried deleting a Label and reducing the replica count by one, but it didn't work as intended. Moreover, a Deployment constitutes a single "template" for all of its Pods. Can I specify a separate PVC for each Pod of a Deployment? If so, how?
  – MikiTesi, Nov 12 '18 at 8:39










• I'm sorry if I wasn't clear. By saying "...Use Deployment like you have [been]" I meant what you were describing as "...one new Deployment (with exactly one replica) and one PersistentVolumeClaim" for every new instance of the application.
  – apisim, Nov 12 '18 at 13:48










• Oh ok. Yes, I've managed to make it work that way, but I would still like to improve my solution now :)
  – MikiTesi, Nov 12 '18 at 14:41









