Suggestion to create a proxy API over Google Places autocomplete API









I am a mobile developer and have started using the Google Places API for autocomplete suggestions to find places in my app. But I have observed that the Places API becomes costly at scale, so I have made some optimizations on the mobile side:



  1. Trigger autocomplete only after at least 3 characters have been typed.

  2. Debounce requests: make an API call only if 500 milliseconds have passed since the last keystroke.

  3. Cache results locally with an LRU eviction policy.
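The three rules above can be sketched in plain Java. This is an illustrative helper, not from any SDK: the class name, constants, and the injected timestamps are assumptions for testability, and the timestamp check is a simple approximation of a debounce.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Combines the three client-side rules: minimum query length,
// a 500 ms debounce, and an LRU result cache.
class AutocompleteThrottle {
    private static final int MIN_CHARS = 3;
    private static final long DEBOUNCE_MS = 500;
    private static final int CACHE_SIZE = 100;

    private long lastKeystrokeAt = 0;

    // LinkedHashMap in access order evicts the least recently used entry.
    private final Map<String, String> cache =
        new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > CACHE_SIZE;
            }
        };

    /** Returns true if this keystroke should trigger a network call now. */
    boolean shouldCallApi(String query, long nowMillis) {
        if (query.length() < MIN_CHARS) return false;     // rule 1
        if (cache.containsKey(query)) return false;       // rule 3: cache hit
        if (nowMillis - lastKeystrokeAt < DEBOUNCE_MS) {  // rule 2
            lastKeystrokeAt = nowMillis;
            return false;
        }
        lastKeystrokeAt = nowMillis;
        return true;
    }

    void store(String query, String resultJson) { cache.put(query, resultJson); }
    String cached(String query) { return cache.get(query); }
}
```

In a real Android client the timestamp would come from the keystroke event and a pending call would be scheduled on a handler; the point here is only where each of the three checks sits.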

With these optimizations the client side is in good shape, but now I also want to optimize on the backend. For this, I will create a wrapper around the Google Places Autocomplete API with server-side caching. The cache will have a time span of 30 days, in line with Google's guidelines.



I need help understanding how to design this.
What key and value combination should I store for the autocomplete suggestions?
Should I use Redis or Hazelcast?
I am writing my backend in Java on AWS, using a microservice architecture.
Is there an already-implemented solution I can look into and learn from?



Please help, as I am a newbie backend developer.










      java amazon-web-services google-places-api backend hazelcast






      asked Nov 10 at 3:48









      Balraj Singh

          1 Answer
          accepted










          Before going down this path, have you done a cost analysis to see whether this will be worthwhile? Keep in mind that this is now code you need to maintain, and cloud infrastructure requires some care and feeding. That is, in your pricing analysis, don't forget to factor your own time into the cost calculations. Is it still financially worth it?



          Not knowing your transaction volumes, it sounds like you've done a fair amount of free optimization on the client side. If you add the server-side optimizations, you're effectively adding a cloud-to-cloud call plus the extra latency of the various AWS services you're using. Are you OK with taking a performance hit?



          If you still think it's worthwhile, the path I would recommend is the serverless route: API Gateway -> Lambda -> DynamoDB. This keeps your costs relatively low, since Lambda has a fairly generous free tier. If you need faster performance, you can always insert Redis via ElastiCache into the stack later.
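The core of that Lambda is a cache-aside lookup with a 30-day expiry. Here is a store-agnostic sketch; the in-memory map and the `upstream` function are illustrative stand-ins for the DynamoDB table and the Places API client, not real AWS or Google SDK types:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside: check the cache, fall back to the upstream API,
// store the response with a 30-day expiry.
class CachedAutocomplete {
    static final Duration TTL = Duration.ofDays(30); // per Google's caching guideline

    record Entry(String json, Instant expiresAt) {}

    private final Map<String, Entry> store = new HashMap<>(); // stands in for DynamoDB
    private final Function<String, String> upstream;          // stands in for the Places API
    int upstreamCalls = 0;                                    // exposed for the example below

    CachedAutocomplete(Function<String, String> upstream) { this.upstream = upstream; }

    String suggestions(String cacheKey, Instant now) {
        Entry e = store.get(cacheKey);
        if (e != null && now.isBefore(e.expiresAt)) {
            return e.json;                          // cache hit: no billable request
        }
        upstreamCalls++;
        String json = upstream.apply(cacheKey);     // billable Places API call
        store.put(cacheKey, new Entry(json, now.plus(TTL)));
        return json;
    }
}
```

With DynamoDB you would store `expiresAt` as a TTL attribute so expired items are removed automatically, which keeps the 30-day limit enforced without any sweeper code.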



          As far as what you need to store: you'd probably want to cache the inputs the user enters along with the information returned from the Places API. For example, capture the search string, the location information, and whichever fields you use (e.g. place_id, icon, opening_hours); basically whatever you're using today. It has very similar needs to your LRU cache.
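One way to turn those inputs into a cache key is to normalize the query text and snap the location to a coarse grid, so nearby users typing the same prefix share one entry. This is a sketch under assumptions: the `ac:` prefix and the 0.1-degree grid (~11 km) are illustrative choices, not anything the Places API prescribes.

```java
import java.util.Locale;

// Cache key = normalized query + location snapped to a 0.1-degree grid.
class CacheKeys {
    static String key(String query, double lat, double lng) {
        String q = query.trim().toLowerCase(Locale.ROOT);
        double gridLat = Math.floor(lat * 10) / 10.0;   // snap to grid cell
        double gridLng = Math.floor(lng * 10) / 10.0;
        return String.format(Locale.ROOT, "ac:%s:%.1f:%.1f", q, gridLat, gridLng);
    }
}
```

The value stored under that key would be the serialized prediction list (place_id, description, and whatever other fields the app displays). A coarser grid raises the hit rate but makes suggestions less location-accurate, so tune the cell size to your use case.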






          • Hey @Jason, thanks for the pointers above. I also want to know whether we can use per-session pricing for Autocomplete requests rather than per-request pricing. Could that also help us optimize this from the client side or the server side? Link: developers.google.com/places/web-service/…
            – Balraj Singh
            Nov 11 at 2:49










          • I'm not quite sure what you're asking; I think it's whether there's a cost benefit to per-session vs. per-request pricing. AWS doesn't really have a concept of per-session pricing; everything on the serverless side is per request. The other option is the EC2 route, which means always having a server up and running, probably with an Elastic Load Balancer in front. The best advice I can give: if you know roughly how many requests you're processing, run the numbers for both scenarios with realistic growth projections and see if it makes sense.
            – Jason Armstrong
            Nov 11 at 13:31










          answered Nov 10 at 13:26









          Jason Armstrong
