Suggestion to create a proxy API over Google Places autocomplete API
I am a mobile developer and have started using the Google Places API for autocomplete suggestions to find places in my app. However, I have observed that the Google Places API becomes costly at scale, so I have made some optimizations on the mobile side:
- Only trigger autocomplete once at least 3 characters have been typed
- Debounce the autocomplete API call: a request is only made if 500 milliseconds pass between two typed characters
- Cache results locally with an LRU eviction policy (roughly as sketched below)
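This is roughly what the client-side logic looks like (a simplified sketch; class and method names are illustrative, not my actual code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Function;

// Debounces keystrokes and serves repeated queries from a small LRU cache
// before falling back to the remote Places call (passed in as a function).
public class AutocompleteClient {

    private static final int MIN_CHARS = 3;
    private static final long DEBOUNCE_MS = 500;
    private static final int CACHE_SIZE = 100;

    // LinkedHashMap in access-order mode gives a simple LRU cache.
    private final Map<String, List<String>> lruCache =
            new LinkedHashMap<String, List<String>>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, List<String>> eldest) {
                    return size() > CACHE_SIZE;
                }
            };

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;
    private final Function<String, List<String>> remoteLookup; // wraps the actual Places API call

    public AutocompleteClient(Function<String, List<String>> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // Called on every keystroke; only the last query within the debounce window fires.
    public synchronized void onQueryChanged(String query, Consumer<List<String>> callback) {
        if (query.length() < MIN_CHARS) {
            return; // too short, never hit the API
        }
        List<String> cached = lruCache.get(query);
        if (cached != null) {
            callback.accept(cached); // served locally, no API cost
            return;
        }
        if (pending != null) {
            pending.cancel(false); // user is still typing, drop the previous request
        }
        pending = scheduler.schedule(() -> {
            List<String> results = remoteLookup.apply(query);
            synchronized (this) {
                lruCache.put(query, results);
            }
            callback.accept(results);
        }, DEBOUNCE_MS, TimeUnit.MILLISECONDS);
    }
}
```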
With these optimizations the client side is in good shape, but now I also want to optimize on the backend side. For this, I plan to create a wrapper over the Google Places autocomplete API with server-side caching. The cached entries would have a lifespan of 30 days, in line with Google's guidelines.
I need help understanding how to design this:
- What key/value combination should I use to store autocomplete suggestions?
- Should I use Redis or Hazelcast?
My backend is written in Java, runs on AWS, and uses a microservice architecture.
Is there an already implemented solution I can look at and learn from?
Please help, as I am a newbie backend developer.
java amazon-web-services google-places-api backend hazelcast
asked Nov 10 at 3:48
Balraj Singh
1 Answer
Before going down this path, have you done a cost analysis to see whether this will be worthwhile? Keep in mind that this is now code you need to maintain, and cloud infrastructure does require some care and feeding; in other words, don't forget to factor your own time into the cost calculations. Is it still financially worth it?
Without knowing your transaction volumes, it sounds like you've already done a fair amount of free optimization on the client side. If you add the server-side optimizations, you're effectively adding a cloud-to-cloud call plus the extra latency of the various AWS services you're using. Are you OK with taking a performance hit?
If you still think it's worthwhile, the path I would recommend is the serverless route: API Gateway -> Lambda -> DynamoDB. This will keep your costs relatively low since Lambda has a fairly generous free tier. If you need faster performance, you can always insert Redis via ElastiCache into the stack later.
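A minimal sketch of what that Lambda could look like in Java, assuming a DynamoDB table with TTL enabled on an expiry attribute (the table name, attribute names, and environment variable are assumptions for illustration, not a prescribed schema):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Lambda behind API Gateway: look up the query in a DynamoDB cache table,
// fall back to the Places Autocomplete API on a miss, and write the result
// back with a 30-day expiry attribute (DynamoDB TTL assumed to be enabled on "expiresAt").
public class AutocompleteProxyHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    private static final String TABLE = "autocomplete-cache";   // illustrative table name
    private static final Duration CACHE_TTL = Duration.ofDays(30);

    private final DynamoDbClient dynamo = DynamoDbClient.create();
    private final HttpClient http = HttpClient.newHttpClient();

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        String query = event.getQueryStringParameters().get("input");
        String cacheKey = query.trim().toLowerCase();

        // 1. Try the cache first.
        GetItemResponse cached = dynamo.getItem(GetItemRequest.builder()
                .tableName(TABLE)
                .key(Map.of("cacheKey", AttributeValue.builder().s(cacheKey).build()))
                .build());
        if (cached.hasItem()) {
            return new APIGatewayProxyResponseEvent()
                    .withStatusCode(200)
                    .withBody(cached.item().get("payload").s());
        }

        // 2. Cache miss: call the Places Autocomplete API.
        String url = "https://maps.googleapis.com/maps/api/place/autocomplete/json?input="
                + URLEncoder.encode(query, StandardCharsets.UTF_8)
                + "&key=" + System.getenv("PLACES_API_KEY");
        String payload;
        try {
            payload = http.send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString()).body();
        } catch (Exception e) {
            return new APIGatewayProxyResponseEvent().withStatusCode(502).withBody("upstream error");
        }

        // 3. Write through with an expiry timestamp for DynamoDB TTL.
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("cacheKey", AttributeValue.builder().s(cacheKey).build());
        item.put("payload", AttributeValue.builder().s(payload).build());
        item.put("expiresAt", AttributeValue.builder()
                .n(Long.toString(Instant.now().plus(CACHE_TTL).getEpochSecond())).build());
        dynamo.putItem(PutItemRequest.builder().tableName(TABLE).item(item).build());

        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(payload);
    }
}
```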
As far as what you need to store, you'd probably want to cache the various inputs the user is entering along with the information returned from the Places API. For example, you'll probably want to capture the search string, location information, and then any of the fields you want (e.g. place_id, icon, opening_hours, etc.); basically whatever you're using today. It has very similar needs to your LRU cache.
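One possible way to build the cache key from those inputs; the text normalization and the coordinate rounding are assumptions meant to raise the hit rate, not anything the Places API requires:

```java
import java.util.Locale;

// Builds a cache key from the search string plus an (optional) location bias.
// Normalizing the text and snapping coordinates to a coarse grid are assumed
// tweaks to let nearby users share cache entries; tune or drop them as needed.
public final class CacheKeys {

    private CacheKeys() {}

    public static String forAutocomplete(String input, Double lat, Double lng) {
        String normalizedInput = input.trim().toLowerCase(Locale.ROOT);
        if (lat == null || lng == null) {
            return "ac:" + normalizedInput;
        }
        // Round to ~0.01 degrees (roughly 1 km) so nearby requests map to one key.
        double gridLat = Math.round(lat * 100) / 100.0;
        double gridLng = Math.round(lng * 100) / 100.0;
        return String.format(Locale.ROOT, "ac:%s:%.2f:%.2f", normalizedInput, gridLat, gridLng);
    }
}

// Example: CacheKeys.forAutocomplete("Coffee ", 12.9716, 77.5946) -> "ac:coffee:12.97:77.59"
```

The value would then be the (possibly trimmed) response payload for that key, expiring after 30 days.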
answered Nov 10 at 13:26
Jason Armstrong
Hey @Jason, thanks for the pointers above. I also wish to know whether we can use the per-session cost for Autocomplete requests rather than the per-request cost. Can that also help us optimise this from the client side or the server side? Link: developers.google.com/places/web-service/…
– Balraj Singh
Nov 11 at 2:49
I'm not quite sure what you're asking; I think you're asking whether there's a cost benefit to going per session vs. per request. AWS doesn't really have a concept of per-session pricing: everything on the serverless side is per request. The other option is to go the EC2 route, which equates to always having a server up and running, along with probably wanting to add an Elastic Load Balancer. The best advice I can give you: if you know roughly how many requests you're processing, run the numbers for both scenarios along with realistic growth projections and see whether it makes sense.
– Jason Armstrong
Nov 11 at 13:31