Local cache of PostgreSQL table/view
I'm developing a small Node.js application that uses data from a remote PostgreSQL database.
Everything works, but as the application scales, the lookups and the round-trip time to and from PostgreSQL become a hurdle.
I would like to implement a local cache of my table/view from PostgreSQL.
I've thought of using an in-memory database like SQLite and synchronizing it on the fly whenever something changes in PostgreSQL.
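Roughly what I had in mind is sketched below. This is only an illustration, not code from my app: it uses a plain in-process Map instead of SQLite to keep it short, and the pg client library, the items table, the items_changed notification channel (fired by a trigger calling pg_notify) and the DATABASE_URL variable are all placeholder names.

// Rough sketch of the "local cache, invalidated when PostgreSQL changes" idea.
import { Client } from "pg";

const db = new Client({ connectionString: process.env.DATABASE_URL });

// Naive in-process cache: query text -> rows.
const cache = new Map<string, any[]>();

async function cachedQuery(sql: string): Promise<any[]> {
  const hit = cache.get(sql);
  if (hit) return hit;                   // served locally, no round trip
  const { rows } = await db.query(sql);  // otherwise go to the remote server
  cache.set(sql, rows);
  return rows;
}

async function main() {
  await db.connect();
  await db.query("LISTEN items_changed");      // subscribe to change notifications
  db.on("notification", () => cache.clear());  // drop stale entries on any change

  console.log(await cachedQuery("SELECT id, name FROM items"));
}

main().catch(console.error);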
But... it seems like a lot of development and a bit cumbersome.
Are there simpler, more straightforward solutions?
postgresql sqlite caching memory
asked Nov 15 '18 at 9:01 by Alfred Balle; edited Nov 15 '18 at 11:46 by a_horse_with_no_name
What kind of "hurdle" are you talking about? A good database server can sustain many, many requests per second, especially if those are only read requests. How many requests are you expecting that you think Postgres won't be able to handle them?
– a_horse_with_no_name
Nov 15 '18 at 11:47
It's not only PostgreSQL handling the requests, but also the round-trip time. It's around 5,000-10,000 requests per second.
– Alfred Balle
Nov 15 '18 at 12:24
10,000 requests per second shouldn't be a problem on modern hardware, though of course that depends on the nature of the requests. If those queries are aggregations over hundreds of millions of rows, then perhaps not. But if they are "simple" lookups that can use indexes efficiently, I wouldn't expect many problems. Get a server with plenty of CPUs, enough RAM and fast hard disks (preferably SSDs), and you don't need the additional complexity of synchronizing an in-memory database.
– a_horse_with_no_name
Nov 15 '18 at 12:32
OK, that at least sounds good. But I'm still concerned about round-trip times: my Node.js app is running on an Amazon server, and the PostgreSQL database is hosted remotely via Compose.
– Alfred Balle
Nov 15 '18 at 12:43
I can get about 9,000 (read/write) transactions per second (using pgbench) on a six-year-old server in our local network running Postgres on VMware.
– a_horse_with_no_name
Nov 15 '18 at 13:00
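One quick way to tell whether round-trip latency (rather than server-side work) dominates is to time a trivial query from the Node process itself. Below is a minimal sketch using the pg client; the DATABASE_URL variable and the run count of 100 are arbitrary choices for illustration.

// Time a trivial query repeatedly; since SELECT 1 does essentially no work on
// the server, the average is dominated by the network round trip.
import { Client } from "pg";

async function main() {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();

  const runs = 100;
  const start = Date.now();
  for (let i = 0; i < runs; i++) {
    await db.query("SELECT 1");
  }
  console.log(`average round trip: ${((Date.now() - start) / runs).toFixed(2)} ms`);

  await db.end();
}

main().catch(console.error);

If that average is in the tens of milliseconds, then 5,000-10,000 sequential requests per second are out of reach over a single connection no matter how fast the server is, which is when a local cache starts to pay off.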