My Bacula transfer rate is extremely low. How can I configure the Director to improve this?
I'm building a cloud-based solution with Bacula to back up all of our servers. We have many servers on premise and some on clouds such as AWS and OpenCloud.
I'm seeing very low transfer rates for the backups (around 400 KB/s) on small filesystems of about 5-6 GB each. This worries me because we are testing with these clients and their full backups already take about two hours to finish, and later we will add some really big clients (about 500 GB-1 TB each) that also need full backups.
This is the message printed after the backups of these clients:
09-Nov 03:43 bacula-dir JobId 37: Start Backup JobId 37, Job=Backup-mailserverp.2018-11-08_22.15.00_13
09-Nov 06:08 bacula-sd JobId 37: Elapsed time=02:24:57, Transfer rate=580 Bytes/second
  Scheduled time:   08-Nov-2018 22:15:00
  Start time:       09-Nov-2018 03:43:51
  End time:         09-Nov-2018 06:08:52
  Elapsed time:     2 hours 25 mins 1 sec
  FD Bytes Written: 5,039,356 (5.039 MB)
  SD Bytes Written: 5,048,922 (5.048 MB)

09-Nov 01:09 bacula-dir JobId 36: Start Backup JobId 36, Job=Backup-nagios.2018-11-08_22.15.00_12
09-Nov 03:43 bacula-sd JobId 36: Elapsed time=02:34:39, Transfer rate=386.9 K Bytes/second
  Elapsed time:     2 hours 34 mins 47 secs
  FD Bytes Written: 3,590,358,216 (3.590 GB)
  SD Bytes Written: 3,590,441,488 (3.590 GB)

09-Nov 00:38 bacula-dir JobId 35: Start Backup JobId 35, Job=Backup-bapuppet01.2018-11-08_21.25.00_11
09-Nov 00:38 bacula-sd JobId 34: Elapsed time=02:11:17, Transfer rate=35.68 K Bytes/second
  Scheduled time:   08-Nov-2018 21:05:00
  Start time:       08-Nov-2018 22:27:30
  End time:         09-Nov-2018 00:38:52
  Elapsed time:     2 hours 11 mins 22 secs
This output shows three servers: two on premise and one hosted on OpenCloud. Given the structure, we assumed the on-premise clients would be the slowest to back up, but after testing the connection with iperf we see the following:
FROM SERVER TO CLIENT
[root@otc-bacula ~]# iperf -c 172.xx.xx.xxx -p 9102 -i 2 -t 60
------------------------------------------------------------
Client connecting to 172.xx.xx.xxx, TCP port 9102
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 172.xx.xx.xxx port 33902 connected with 172.xx.xx.xxx port 9102
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 95.9 MBytes 402 Mbits/sec
[ 3] 2.0- 4.0 sec 180 MBytes 754 Mbits/sec
[ 3] 4.0- 6.0 sec 132 MBytes 554 Mbits/sec
[ 3] 6.0- 8.0 sec 70.6 MBytes 296 Mbits/sec
FROM CLIENT TO SERVER ################################
[root@v-nagios ~]# iperf -c 172.xx.xxx.xxx -p 9102 -i 2 -t 60
------------------------------------------------------------
Client connecting to 172.xx.xxx.xxx, TCP port 9102
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 172.xx.xxx.xxx port 41538 connected with 172.xxx.xxx.xxx port 9102
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 3.75 MBytes 15.7 Mbits/sec
[ 3] 2.0- 4.0 sec 4.00 MBytes 16.8 Mbits/sec
[ 3] 4.0- 6.0 sec 2.50 MBytes 10.5 Mbits/sec
[ 3] 6.0- 8.0 sec 4.38 MBytes 18.4 Mbits/sec
[ 3] 8.0-10.0 sec 3.50 MBytes 14.7 Mbits/sec
[ 3] 10.0-12.0 sec 2.12 MBytes 8.91 Mbits/sec
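What stands out to me is the asymmetry: server-to-client runs at several hundred Mbit/s with a 45 KB default window, while client-to-server stays around 9-18 Mbit/s with a 16 KB default window. A quick way to check whether window size and latency are the limit is to repeat the test with an explicit larger window using iperf's -w option (a sketch; the address and the window size are example values):

# on the Bacula server, run a temporary iperf listener with a larger window
iperf -s -p 9102 -w 256K
# on the client, send towards the server with the same window
iperf -c 172.xx.xxx.xxx -p 9102 -w 256K -i 2 -t 60

If the larger window brings the client-to-server rate close to the reverse direction, the backup speed is being limited by the TCP window and WAN latency rather than by Bacula itself.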
I have already tried the data spooling/despooling options, but I'm not seeing any improvement. Does anyone know a way, or a script, to test Bacula's performance and how to improve it? I can reduce the FileSets as much as possible so we only back up what we really need, but there will still be a lot of files and gigabytes.
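This is roughly what I mean by the spooling options I tried (a sketch based on my reading of the Bacula documentation; the spool directory and size are example values, not my exact configuration):

# bacula-dir.conf - Job (or JobDefs) resource
Job {
  Name = "Backup-nagios"
  # ... existing Client / FileSet / Schedule / Storage / Pool lines ...
  Spool Data = yes          # spool file data on the SD before writing to the volume
  Spool Attributes = yes    # send attribute records to the catalog only after despooling
}

# bacula-sd.conf - Device resource
Device {
  Name = FileStorage
  # ... existing Archive Device / Media Type lines ...
  Spool Directory = /var/spool/bacula      # example path
  Maximum Spool Size = 50G                 # example limit
}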
This is the bacula-dir.conf, if it helps:
Director {                              # define myself
  Name = bacula-dir
  DIRport = 9101                        # where we listen for UA connections
  DirAddress = 172.19.120.106
  QueryFile = "/etc/bacula/query.sql"
  WorkingDirectory = "/var/spool/bacula"
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 10
  Password = "123456"                   # Console password
  Messages = Daemon
  Heartbeat Interval = 1
}

Storage {
  Name = File
  # Do not use "localhost" here
  Address = 172.19.120.106              # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "123456"
  Device = FileStorage
  Media Type = File
}

Pool {
  Name = File
  Pool Type = Backup
  Label Format = OpenCloud-
  Recycle = yes                         # Bacula can automatically recycle Volumes
  AutoPrune = yes                       # Prune expired volumes
  Volume Retention = 365 days           # one year
  Maximum Volume Bytes = 100G           # Limit Volume size to something reasonable
  Maximum Volumes = 350                 # Limit number of Volumes in Pool
}
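In case concurrency matters here: as far as I understand, the effective number of parallel jobs is limited by the lowest Maximum Concurrent Jobs value across the Director, the Storage daemon, and each File daemon, so the Director's value of 10 only helps if the other daemons allow it too. A sketch of what I mean (resource names and values are examples, not my exact files):

# bacula-sd.conf - Storage resource
Storage {
  Name = bacula-sd
  SDPort = 9103
  WorkingDirectory = "/var/spool/bacula"
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 10
}

# bacula-fd.conf on each client - FileDaemon resource
FileDaemon {
  Name = nagios-fd
  FDport = 9102
  WorkingDirectory = "/var/spool/bacula"
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 10
}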
I think this information is useful, but if you need more details about any service, or need me to run something, I'll gladly provide whatever is needed. I really need to increase the transfer rate of this solution.
linux backup bacula
asked Nov 12 '18 at 19:47 by J. Moretti
1 Answer
I realized I was using MySQL as the catalog database backend and was having trouble with the bulk attribute inserts. So I did a fresh install, this time using PostgreSQL as the database, and the transfer rate reached almost 40 MB/s. A great improvement.
I'll mark my own answer as the correct one.
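For anyone taking the same route, this is roughly the sequence I mean for recreating the catalog on PostgreSQL (a sketch; the script locations and service names vary by distribution and Bacula version, so treat the paths below as assumptions):

# stop the Director before touching the catalog
systemctl stop bacula-dir

# create the new catalog as the postgres user, using the scripts shipped with Bacula
su - postgres -c /usr/libexec/bacula/create_postgresql_database
su - postgres -c /usr/libexec/bacula/make_postgresql_tables
su - postgres -c /usr/libexec/bacula/grant_postgresql_privileges

# point the Catalog resource in bacula-dir.conf at the new database, then restart
systemctl start bacula-dir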
answered Nov 15 '18 at 12:27 by J. Moretti