Updating a central cache with data changes from different systems in a microservices architecture at scale
We're building a microservice system in which new data can come from three (or more) different sources and eventually affects the end user.
The purpose of the system doesn't matter for this question, so I'll try to keep it simple. Please see the attached diagram.
Data can come from the following sources:
Back-office site: defines the system and user configurations.
Main site: where users interact with the site and perform actions.
External data sources: such as partners that can provide additional data (supplementary information) about users.
The services are:
Site-back-office service: serves the back-office site.
User-service: serves the main site.
Import service: imports additional data (supplementary information) from external sources.
User cache service: syncs with all of the above data and combines it into pre-prepared cache responses. This is needed because the main site must serve hundreds of millions of users with very low latency.
The main idea is:
- Each microservice has its own db.
- Each microservice can scale.
- Each data change in any of the three sources affects the user and should be sent to the cache service so that it is eventually reflected on the main site.
- The cache (Redis) holds all of the data, combined into pre-prepared responses for the main site.
- Each service publishes its data changes to a Pub/Sub topic so that the cache-service can update the Redis DB (a sketch of such a publish is shown after this list).
- The system should serve around 200 million users.
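For illustration, here is a minimal sketch of how one of the services might publish a change event, assuming Google Cloud Pub/Sub's Python client. The project ID, topic name, and attribute names are placeholders, not the real system's identifiers:

```python
# Sketch only: how a service could publish a user-data change with a version attribute.
# "my-project", "user-data-changes" and the attribute names are hypothetical.
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "user-data-changes")

def publish_user_change(user_id: str, change: dict) -> None:
    data = json.dumps(change).encode("utf-8")
    # Pub/Sub attributes must be strings; the version lets consumers detect stale updates.
    future = publisher.publish(
        topic_path,
        data=data,
        user_id=user_id,
        version=str(int(time.time() * 1000)),
    )
    future.result()  # optionally block until the publish succeeds
```

The version here is just a publisher-side timestamp; it could equally be a per-user counter maintained by the owning service. The point is that some ordering hint travels with every message.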
So... the questions are:
- Since the user-cache service can (and must) scale, what happens if, for example, there are two update messages waiting on Pub/Sub, one old and one new? How do we process only the new message and prevent the case where one cache-service instance writes the new message's data to Redis and then another cache-service instance overrides it with the old message?
- There is also a case where a cache-service instance needs to first read the current cached user data, apply the change to it, and only then write the result back to the cache. How do we prevent the case where, for example, two instances read the current cache data while a third instance updates it with new data, and they then override it with their own data? (The naive flow I'm worried about is sketched below.)
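For example, the naive read-modify-write flow I'm worried about looks roughly like this (a sketch using the redis-py client; the key layout is a placeholder):

```python
# Sketch of the naive flow: read, modify locally, write back. A second instance
# running the same three steps concurrently can silently overwrite this write.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def apply_update(user_id: str, change: dict) -> None:
    key = f"user:{user_id}:response"          # placeholder key layout
    current = json.loads(r.get(key) or "{}")  # 1) read the current cached response
    current.update(change)                    # 2) apply the change in memory
    r.set(key, json.dumps(current))           # 3) write back -- another instance may
                                              #    have written between steps 1 and 3
```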
Is it at all possible to pre-prepare responses based on several sources that can change periodically? What is the right approach to this problem?
caching scale microservices google-cloud-pubsub
edited Nov 11 at 9:25
Geert Bellekens
asked Nov 10 at 8:32
tomn
1 Answer
I'll try to address some of your points; let me know if I misunderstood what you're asking.
1) I believe you're asking about how to enforce ordering of messages, so that an old update does not override a newer one. There is a "publish_time" field on each message (https://cloud.google.com/pubsub/docs/reference/rpc/google.pubsub.v1#google.pubsub.v1.PubsubMessage) that you can use to coordinate based on the time the Cloud Pub/Sub server received your publish request. If you wish to coordinate based on some other time or ordering mechanism, you can add an attribute to your PubsubMessage or payload to do so (a rough consumer-side sketch is shown after these points).
2) This seems to be a general synchronization problem, not necessarily related to Cloud Pub/Sub; I'll leave this to others to answer.
3) Cloud Dataflow implements a windowing and watermark mechanism similar to what you're describing. Perhaps you could use this to remove conflicting updates and perform preprocessing prior to writing them to the backing store.
https://beam.apache.org/documentation/programming-guide/#windowing
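For illustration only, here is a minimal sketch of what an attribute-based staleness check could look like on the consumer side, assuming a Redis-backed cache and a numeric "version" attribute on each message. The project ID, subscription name, and key layout are hypothetical, not part of your system:

```python
# Sketch only: drop stale Pub/Sub messages by comparing a "version" attribute
# against the version already stored in Redis. All names/IDs are placeholders.
import redis
from google.cloud import pubsub_v1

r = redis.Redis(host="localhost", port=6379)

# Lua executes atomically inside Redis: write only if the incoming version is newer.
SET_IF_NEWER = """
local current = tonumber(redis.call('HGET', KEYS[1], 'version') or '-1')
if tonumber(ARGV[1]) > current then
  redis.call('HSET', KEYS[1], 'version', ARGV[1])
  redis.call('HSET', KEYS[1], 'payload', ARGV[2])
  return 1
end
return 0
"""
set_if_newer = r.register_script(SET_IF_NEWER)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    user_id = message.attributes.get("user_id", "")
    version = message.attributes.get("version", "0")
    payload = message.data.decode("utf-8")
    # Older versions are silently dropped; newer ones replace the cached entry.
    set_if_newer(keys=[f"user:{user_id}:response"], args=[version, payload])
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "cache-updates")
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
streaming_pull.result()  # block the main thread while messages are processed
```

Because the compare-and-write happens in a single Lua script, it also covers the read-modify-write race: two instances can process messages concurrently, but Redis applies each version check and write atomically.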
-Daniel
answered Nov 13 at 19:25
Daniel Collins