Friday, 30 July 2021

[UPDATE] Sigil Portable v1.7.0


Sigil is a tabbed editor for the EPUB format commonly used in eBooks. The program includes basic formatting, customizable fonts and styles, hotkeys and spellcheck. You can generate a table of contents, embed files (audio, video, or other), edit/add metadata, rearrange or split pages, add vector (SVG) images, and use "clips" for frequently entered text.

As the format is based on simple HTML, the program includes both a WYSIWYG-style book view and a code view, the ability to modify cascading style sheets, and additional functionality via plugins. The program can even shrink some eBooks by discarding unused media elements.

Cross-platform, with clients for Windows, Linux and Mac, and support for many languages. Sigil Portable is a wrapper version of the program, and X-Sigil is also available.



pg_timetable v4 is out!

Our team is proud to introduce the new major version, pg_timetable v4, with new documentation, configuration file support, reimplemented logging machinery, job and task timeout support, the new CopyFromFile built-in task, and much more!

Please use our new detailed manual to learn more about the new features and settings.
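For example, a scheduler worker can now be started with the new timeout and worker-pool options (a minimal sketch: the option names come straight from the changelog below, while the connection options, the values, and the assumption that timeouts are given in milliseconds are illustrative):

$ pg_timetable --clientname=worker001 --dbname=timetable \
      --chain-timeout=60000 --task-timeout=30000 \
      --cronworkers=8 --intervalworkers=4 --log-database-level=error

The same settings can also be kept in a configuration file thanks to the new Viper-based support; see the config.example.yaml file referenced in the changelog. And the new CopyFromFile built-in task loads a file into a table through the COPY machinery, roughly the job a client-side copy performs (table and file names here are hypothetical):

$ psql -d timetable -c "\copy location FROM 'orte.txt'"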

Download

You're welcome to download pg_timetable v4 right now at: https://github.com/cybertec-postgresql/pg_timetable/releases

Feedback

Please don't hesitate to ask questions, report bugs, star the pg_timetable project, and tell the world about it.

Changelog

  • [!] add configuration file support with Viper, closes #177 bebab44
  • [!] add CopyFromFile built-in task f87d6fc
  • [!] add Readthedocs documentation (#254) 004b31f
  • [!] merge timetable.command table with timetable.task, closes #261 8604b18
  • [!] reimplement logging, closes #158 (#231) 4313948
  • [!] remove jmoiron/sqlx and DATA-DOG/go-sqlmock dependencies, closes #187 #202 6542b71
  • [!] remove old migrations and start from scratch, closes #258 020563d
  • [!] rewrite cron handling from scratch 35a8cc8 fbfbfb2
  • [!] rewrite pgengine and scheduler without global variables as classes fa37167
  • [!] use Go 1.16 to build releases 729ef31
  • [!] use new consistent terminology: command -> task -> chain f59fdda
  • [+] add $PGTT_CLIENTNAME env var 31faae6
  • [+] add --cronworkers and --intervalworkers options under "Resource" group 91f5c0e
  • [+] add .pgpass support, closes #247 d3a317f
  • [+] add --chain-timeout command-line parameter, closes #270 7f27a50
  • [+] add --log-database-level command-line parameter, closes #274 338c28c
  • [+] add --task-timeout command line parameter 80428a7
  • [+] add all release badge 0b1ae61
  • [+] add chain timeout, closes #267 05b9736
  • [+] add config.example.yaml file bebab44
  • [+] add config_test 5df8386
  • [+] add database comments for objects f59fdda
  • [+] add docs badge e325ff5
  • [+] add high load skip timeout to LogHook 0513ba8
  • [+] add log hook for PostgreSQL using COPY machinery 93d51cc
  • [+] add LogHook tests 0513ba8
  • [+] add output for built-in and SQL tasks to the timetable.execution_log, closes #185 (#224) 681caf3
  • [+] add pgengine.NewDB function 330cb62
  • [+] add pgxpoolIface 33fa7a4
  • [+] add support for logging to file, closes #272 6a73a80
  • [+] add supported cloud environments to the readme, #256 70c9f49
  • [+] add supported PostgreSQL versions and operating systems to the readme, closes #256 5695742
  • [+] add task timeout, closes #271 80428a7
  • [+] add TASK_STARTED and TASK_DONE statuses, rename STARTED to CHAIN_STARTED 660e32b
  • [+] add TestMigratorOptions() and increase TestMigrateTxError() coverage 466c909
  • [+] add TestSchedulerExclusiveLocking() 08e7ff9
  • [+] add TestSelectChains() 3696f01
  • [+] add time zone information to the manual 77c0237
  • [+] add version number to all release files, closes #228 cf72721
  • [+] allow specify content-type for SendMail built-in task, closes #225 (#226) 100bedd
  • [+] bump github.com/pashagolub/pgxmock 1.2.0 af98bfd
  • [+] bump github.com/spf13/viper to 1.8.1 e7b30fd
  • [+] bump jackc/pgconn version to 1.9.0 7f2d671
  • [+] bump jackc/pgtype version to 1.8.0 7f2d671
  • [+] bump jackc/pgx version to 4.12.0 7f2d671
  • [+] bump jessevdk/go-flags version to 1.5.0 7f2d671
  • [*] bump georgysavva/scany to 0.2.9 c1f9529
  • [+] create Dependabot config file 37729d7
  • [+] delete only succeeded self-destructive chains, closes #265 613a945
  • [+] increase TestMigrations() coverage 975d68c
  • [+] Increase v4 tests coverage (#222) 9689e50
  • [+] insert run status immediately during max_instance check, closes #223 5765662
  • [+] introduce PgxIface, PgxConnIface, PgxPoolIface b028eaa
  • [+] move cache settings to LogHook 0513ba8
  • [+] set client name during LogHook creation 0513ba8
  • [+] specify password for tests explicitly 524046f
  • [+] use //go:embed for migration .sql files aaee11d
  • [+] use //go:embed for pgengine .sql files b453937
  • [+] use retcode and deferred functions instead of os.Exit() 7a1cdfa
  • [*] change "--port" command-line option type to integer bebab44
  • [*] decrease run_status rows usage by using only task-related information 660e32b
  • [*] improve and rename get_running_jobs() to get_chain_running_statuses() 7a1cdfa
  • [*] improve TestExecuteSQLTask() 396cc88
  • [*] improve timetable.run_status table 7a1cdfa
  • [*] make go test fail fast in the build action 35a8cc8
  • [*] make pgengine.NewDB() and config.NewCmdOptions() use variadic string params 524046f
  • [*] move health_check() function to job_functions.sql 7a1cdfa
  • [*] move Logger to appropriate file b5bcece
  • [*] move PgURL parsing to the pgengine bebab44
  • [*] move SetupCloseHandler to main.go 7a1cdfa
  • [*] remove sensitive information from logs, closes #286 aba954d
  • [*] remove unused chain.excluded_execution_configs column f59fdda
  • [*] remove unused PgEngine.CanProceedChainExecution() 4cf2323
  • [*] remove unused timetable.trig_chain_fixer(), closes #255 5b033d7
  • [*] rename pgengine.UpdateChainRunStatus to AddChainRunStatus 660e32b
  • [*] rename rus_status.current_execution_element column to command_id 660e32b
  • [*] replace "--verbose" command-line option with "--loglevel" bebab44
  • [*] return immediately from pgengine.CanProceedChainExecution if context expired 34946b8
  • [*] simplify pgengine.CanProceedChainExecution() function 7a1cdfa
  • [*] simplify readme.md, #256 b7cc5bf
  • [*] split options into groups: Connection, Logging, Start, etc. bebab44
  • [*] store remote database connection strings in chain table directly, closes #234 20f28f8
  • [*] support alpha-beta strings in tag name for Release action e7318a8
  • [*] switch to ory/mail from abandoned gomail, closes #248 21858fd
  • [*] update Golang version used in Github Actions 944b903
  • [*] update latest release badge by including pre-releases 8645ee0
  • [*] use channel for error instead of variable 0513ba8
  • [*] use dashes in long command-line parameters names 6a73a80
  • [*] uses error log level during tests by default 524046f
  • [-] fix 'date/time field value out of range' error in next_run(), fixes #237 35a8cc8
  • [-] fix --pgurl ignored during connection, closes #252 5d771df
  • [-] fix empty long dash separated command-line parameters, fixes #279 4e8016f
  • [-] fix ErrNoRows check in CanProceedChainExecution() f0701c4
  • [-] fix SelectChain() 8b802c3
  • [-] remove database/sql from import eeb3eb4
  • [-] remove STRICT option from add_job() function, fixes #291 2eff73a
  • [-] remove unneeded logging CheckNeedMigrateDb() function f59fdda


Wednesday, 28 July 2021

Egor Rogov: Locks in PostgreSQL: 4. Locks in memory

To remind you, we've already talked about relation-level locks, row-level locks, locks on other objects (including predicate locks) and interrelationships of different types of locks.

The following discussion of locks in RAM finishes this series of articles. We will consider spinlocks, lightweight locks and buffer pins, as well as event monitoring and sampling tools.


...


Tuesday, 27 July 2021

Patching all my environments with the July 2021 Patch Bundles

Via Upgrade your Database – NOW! by Mike Dietrich

It's patching day again. Hurray! Or not. I realized that on patching day the 19c bundles were all missing, so I wrote this blog post a bit after the usual release day. In my case this will include the Oracle 19.12.0 RU and the July 2021 RU for Oracle 12.2.0.1. Please find the details about patching all my environments with the July 2021 patch bundles below.

As usual, an important note upfront: I patch in-place due to space constraints. But in reality, please always patch out-of-place with a separate home. Please see this blog post.

The post Patching all my environments with the July 2021 Patch Bundles appeared first on Upgrade your Database - NOW!.



Monday, 19 July 2021

Haringvliet slowly turning yellow due to flooding in Limburg and Germany


The water that caused havoc in Limburg and Germany last weekend is almost out of the Netherlands. And it produces a remarkable sight in the Haringvliet. The water between Hellevoetsluis and Stellendam, the last stretch before the North Sea, is slowly turning yellow from all the mud it swept along on its wild journey.


Wednesday, 14 July 2021

Create your summer reading list with Google Play Books

Every summer my family spends a week at a lake house, boating, barbecuing and reading books. My wife and I each choose a few books to share in our family library and read them at the same time on our devices. It gives us a chance to introduce each other to new genres and authors. I typically read thrillers and sci-fi, while my wife prefers a mix of literary fiction and celebrity memoirs.

This year, we used the Google Play Books Android app's new features to discover and organize our vacation reading list. Here's how we're using Google Play Books for our very informal summer book club: 



Customize your bookshelves

One top request from Play Books readers was for customizable shelves. With custom shelves, you can organize and sort your books into themed collections. We made a shelf for our must-read ebooks for "Summer 2021." You can also designate your "All-time favorites" so you always have a list of recommendations ready. Create a "Family listening" shelf for the audiobooks you're saving for a family road trip. Custom shelves make it easy to find the right book at the right time.


Use filters to find your ideal read

Readers also told us that they wanted an easy way to find titles for specific reading needs. Now you can browse by filters like language (to find titles written in a specific language), price range (to see books in your budget) and price drop (to see discounted books).


Find deals on the books you want to read

Customized discount notifications in the Android app now help you find more deals. If you sample or wishlist a book, you'll receive an email if that title is discounted in the future (just make sure you're opted in to marketing emails from Google Play). Take advantage of this feature by wishlisting the titles that interest you when you come across them in the Play Books app.  

With custom shelves, store filtering and deal alerts, you have even more ways to find your next great read on Google Play Books. To get you started, here's our "Summer 2021" shelf:

A custom shelf featuring six books: The Last Thing He Told Me by Laura Dave, Later by Stephen King, What's Mine and Yours by Naima Coster, The Code Breaker by Walter Isaacson, A Master of Djinn by P. Djèlí Clark, and Billie Eilish: In Her Own Words by Billie Eilish.

Hopefully that gives you a little inspiration in creating your own. Here's to a well-read summer.


Friday, 9 July 2021

Visual Studio Code June 2021

Via Visual Studio Code - Code Editing. Redefined. by Visual Studio Code Team



Read the full article


Monday, 5 July 2021

[UPDATE] SavageEd2 v0.6.03


SavageEd2 is a very small and fast Notepad replacement. It can edit files of any size, limited only by memory. It can encrypt text files in .enc format using AES encryption with a 256-bit key (max 32-character password). It has pattern-searching features and is written in x86 assembly language (HLA).

Note: SavageEd2 is the replacement for the former SavageEd.



Friday, 2 July 2021

Gilles Darold: Ora2PG now supports oracle_fdw to increase the data migration speed

It has been 20 years since I started maintaining the Ora2Pg project, an open-source tool for Oracle to PostgreSQL migrations. The first version of Ora2Pg was released on 9 May 2001. Since then, it has gained many features related to schema conversion and data migration. Over that period, I have witnessed several tens of thousands of migrations using Ora2Pg, which also increased the need for optimization. One such optimization is faster data migration by copying multiple tables in parallel and by using parallel jobs to migrate each table. To push the speed as far as possible, version 22.0 of Ora2Pg now supports the foreign data wrapper oracle_fdw to increase data migration speed. This is particularly useful for tables with BLOBs, because that data needs a transformation to bytea which was known to be slow in Ora2Pg and is faster with the C implementation in oracle_fdw.

Installation

In order to use Ora2Pg with oracle_fdw, you need to install both Ora2Pg and oracle_fdw. oracle_fdw has to be installed on the PostgreSQL server to which you are migrating the data. Ora2Pg can be installed on the PostgreSQL server or on any intermediate server between the Oracle and PostgreSQL servers. To install the oracle_fdw extension, please follow these installation steps. To install Ora2Pg, please follow the steps in the installation manual of the official Ora2Pg documentation.
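Once the extension is built, enabling it in the target database looks like the following (a sketch only: as described below, Ora2Pg creates the foreign server and user mapping automatically when FDW_SERVER is set, and the connection values here simply mirror the example configuration used in this article):

CREATE EXTENSION oracle_fdw;
-- Created automatically by Ora2Pg when FDW_SERVER is set; shown for illustration:
CREATE SERVER orcl FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//192.168.1.37:1521/pdb1');
CREATE USER MAPPING FOR gilles SERVER orcl
    OPTIONS (user 'HR', password 'hrpwd');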

Ora2Pg Configuration

After installing the Oracle Instant Client, set ORACLE_HOME as follows in the ora2pg.conf configuration file.

ORACLE_HOME    /u01/app/instantclient_12_2

Connection to Oracle

In this example, I have an Oracle 18c instance running on host 192.168.1.37, with a pluggable database (PDB) called pdb1, in which the sample HR schema has been loaded.

ORACLE_DSN dbi:Oracle:host=192.168.1.37;service_name=pdb1;port=1521
ORACLE_USER HR
ORACLE_PWD hrpwd

Once you have set the Oracle database DSN, you can execute ora2pg to check that it connects successfully.

$ ora2pg -t SHOW_VERSION -c config/ora2pg.conf
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0

Getting to this step successfully is the hardest part of implementing Ora2Pg when you are a novice because of the relative complexity of the installation process. Reading the documentation carefully will normally get you there easily.

Configuration before starting Ora2Pg for the Schema and Data Migration

I want to export the content of the Oracle HR schema into the hr schema in PostgreSQL and set the owner of all objects to the user gilles. I also want to preserve the case of the Oracle object names. For this purpose, I have set the following configuration in the ora2pg.conf file.

SCHEMA       HR
EXPORT_SCHEMA 1
PG_SCHEMA hr
FORCE_OWNER gilles
PRESERVE_CASE 1

Note that Ora2Pg 22.0 needs these settings to be able to export data through oracle_fdw. The upcoming release 22.1 will allow the use of oracle_fdw without preserved case and schema export.

Migrating Schema using Ora2Pg

Before starting the data migration, we need the schema to be created in the PostgreSQL database. Ora2Pg can be used to perform the schema migration from Oracle to PostgreSQL. The Oracle schema, as extracted using the ORACLE_DSN and the configuration specified in the ora2pg.conf file, will be converted to PostgreSQL-specific syntax with an appropriate data type mapping. In this example, we will just export the table definitions and create the same tables in PostgreSQL. We can create a directory in which the converted PostgreSQL DDL will be stored.

$ mkdir tables
$ ora2pg -t TABLE -p -c config/ora2pg.conf -b tables
[========================>] 7/7 tables (100.0%) end of scanning.
Retrieving table partitioning information…
[========================>] 7/7 tables (100.0%) end of table export.

As seen in the above log, the DDL export converted to PostgreSQL syntax has been written to the file tables/output.sql. We can now create a PostgreSQL database named hr, owned by the user gilles, and import the DDL generated in the previous step.

$ createdb hr -O gilles
$ psql -d hr -U gilles -f tables/output.sql

Data Migration using Ora2Pg

First, configure Ora2Pg to perform the data migration on the fly instead of dumping the Oracle data to files and importing them with psql. To enable this direct data migration, we need to set PG_DSN (the PostgreSQL data source) to connect to PostgreSQL.

PG_DSN          dbi:Pg:dbname=hr;host=localhost;port=5432
PG_USER gilles
PG_PWD gilles

You need the Perl module DBD::Pg to be installed; all Linux distributions have their own package for this library, otherwise you can find the sources on CPAN.
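For instance (the package names below are the usual ones, but verify them for your distribution):

$ sudo apt-get install libdbd-pg-perl    # Debian/Ubuntu
$ sudo yum install perl-DBD-Pg           # RHEL/CentOS
$ cpan DBD::Pg                           # or build from CPAN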

We can then proceed with the data migration, using the following command as an example:

$ ora2pg -t COPY -c config/ora2pg.conf 
[========================>] 7/7 tables (100.0%) end of scanning.
[========================>] 25/25 rows (100.0%) Table COUNTRIES (25 recs/sec)
[==> ] 25/215 total rows (11.6%) - (1 sec., avg: 25 recs/sec).
[========================>] 27/27 rows (100.0%) Table DEPARTMENTS (27 recs/sec)
[=====> ] 52/215 total rows (24.2%) - (1 sec., avg: 52 recs/sec).
[========================>] 107/107 rows (100.0%) Table EMPLOYEES (107 recs/sec)
[=================> ] 159/215 total rows (74.0%) - (1 sec., avg: 159 recs/sec).
[========================>] 19/19 rows (100.0%) Table JOBS (19 recs/sec)
[===================> ] 178/215 total rows (82.8%) - (2 sec., avg: 89 recs/sec).
[========================>] 10/10 rows (100.0%) Table JOB_HISTORY (10 recs/sec)
[====================> ] 188/215 total rows (87.4%) - (2 sec., avg: 94 recs/sec).
[========================>] 23/23 rows (100.0%) Table LOCATIONS (23 recs/sec)
[=======================> ] 211/215 total rows (98.1%) - (2 sec., avg: 105 recs/sec).
[========================>] 4/4 rows (100.0%) Table REGIONS (4 recs/sec)
[========================>] 215/215 total rows (100.0%) - (2 sec., avg: 107 recs/sec).
[========================>] 215/215 rows (100.0%) on total estimated data (2 sec., avg: 107 recs/sec)

Data Migration using Ora2Pg with oracle_fdw

This is done just by giving a name for the foreign server to be created. Everything else is automatic, based on the configuration we have already set. In ora2pg.conf, just set the following:

FDW_SERVER      orcl

As I have already done the data migration in the previous step, I have instructed Ora2Pg to truncate the PostgreSQL tables before migrating the data.

TRUNCATE_TABLE    1

I then started the data migration; setting the FDW_SERVER parameter enables data migration through oracle_fdw.

$ ora2pg -t COPY -c config/ora2pg.conf
[========================>] 7/7 tables (100.0%) end of scanning.
[========================>] 7/7 tables (100.0%) end of table export
NOTICE: schema "ora2pg_fdw_import" does not exist, skipping
[========================>] 215/215 rows (100.0%) on total estimated data (2 sec., avg: 107 recs/sec)

That's it.

After the migration, Ora2Pg will leave behind a schema named ora2pg_fdw_import containing all foreign tables created to migrate the data. To clean up the ora2pg_fdw_import schema and the mapping objects created in it, use the following SQL command.

DROP SCHEMA ora2pg_fdw_import CASCADE;

And to remove the oracle_fdw extension, together with the foreign server and user mapping created by Ora2Pg, we can use the following command.

DROP EXTENSION oracle_fdw CASCADE;

Data Migration Performance

The timings reported in the above examples are not representative of real data migration speed because of the small number of rows and the systems hosting the databases. Let us consider a table with more data, so that we can see the timing differences.

With a 30 GB table of 100,000,000 rows including a geometry column, things are a bit different. With a data export through oracle_fdw using a single process, we obtain the following results.

$ ora2pg -t COPY -c config/ora2pg.conf -a TABLE_TEST
[========================>] 1/1 tables (100.0%) end of scanning.
NOTICE: user mapping for "gilles" already exists for server "orcl", skipping
[========================>] 1/1 tables (100.0%) end of table export.
NOTICE: drop cascades to 7 other objects
DETAIL: drop cascades to foreign table ora2pg_fdw_import."COUNTRIES"
drop cascades to foreign table ora2pg_fdw_import."DEPARTMENTS"
drop cascades to foreign table ora2pg_fdw_import."EMPLOYEES"
drop cascades to foreign table ora2pg_fdw_import."JOBS"
drop cascades to foreign table ora2pg_fdw_import."JOB_HISTORY"
drop cascades to foreign table ora2pg_fdw_import."LOCATIONS"
drop cascades to foreign table ora2pg_fdw_import."REGIONS"
[========================>] 100000000/100000000 rows (100.0%) on total estimated data (4183 sec., avg: 23906 recs/sec)

23,900 tuples per second is not bad for an export from an Oracle database hosted on a VirtualBox machine with 4 cores and 6 GB of memory, but this uses only a single process to export the data.

Let's try to use 4 cores to export the data and see the performance. This test table does not have a primary key or a unique index, so we need to give Ora2Pg a column that can be used to balance the data export over the 4 processes. I edited ora2pg.conf to give a numeric column with unique numbers, as follows:

DEFINED_PK      TABLE_TEST:ACQTN_SEQ_NO

The ACQTN_SEQ_NO column of table TABLE_TEST is filled from a sequence, so we can expect a good partitioning of the data based on modulo, even though there is no unique constraint.
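Conceptually, each of the 4 parallel Oracle readers then fetches a disjoint slice of the table, along the lines of the following queries (a sketch of the idea only, not the exact SQL that Ora2Pg generates):

SELECT * FROM TABLE_TEST WHERE MOD(ACQTN_SEQ_NO, 4) = 0;  -- reader 1
SELECT * FROM TABLE_TEST WHERE MOD(ACQTN_SEQ_NO, 4) = 1;  -- reader 2
SELECT * FROM TABLE_TEST WHERE MOD(ACQTN_SEQ_NO, 4) = 2;  -- reader 3
SELECT * FROM TABLE_TEST WHERE MOD(ACQTN_SEQ_NO, 4) = 3;  -- reader 4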

Here are the new results using Oracle data extraction over 4 connections (-J 4):

$ ora2pg -t COPY -c config/ora2pg.conf -a TABLE_TEST -J 4
[========================>] 1/1 tables (100.0%) end of scanning.
NOTICE: user mapping for "gilles" already exists for server "orcl", skipping
[========================>] 1/1 tables (100.0%) end of table export.
NOTICE: drop cascades to foreign table ora2pg_fdw_import."TABLE_TEST"
[========================>] 100000000/100000000 total rows (100.0%) - (1250 sec., avg: 80000 recs/sec), TABLE_TEST
[========================>] 100000000/100000000 total rows (100.0%) - (1250 sec., avg: 80000 recs/sec), TABLE_TEST
[========================>] 100000000/100000000 total rows (100.0%) - (1250 sec., avg: 80000 recs/sec), TABLE_TEST
[========================>] 100000000/100000000 total rows (100.0%) - (1250 sec., avg: 80000 recs/sec), TABLE_TEST
[========================>] 100000000/100000000 rows (100.0%) on total estimated data (1250 sec., avg: 80000 tuples/sec)

So 80,000 tuples per second is far better. Of course, with better hardware and 16 or 32 cores we could reach a higher throughput.

Running the export the same way but without oracle_fdw gives the following results.

$ ora2pg -t COPY -c config/ora2pg.conf -a TABLE_TEST -J 4 -j 2
[========================>] 1/1 tables (100.0%) end of scanning.
[======> ] 25000000/100000000 rows (25.0%) Table TABLE_TEST-part-0 (3032 sec., 8245 recs/sec)
[======> ] 25000000/100000000 rows (25.0%) Table TABLE_TEST-part-2 (3031 sec., 8248 recs/sec)
[======> ] 25000000/100000000 rows (25.0%) Table TABLE_TEST-part-3 (3031 sec., 8248 recs/sec)
[======> ] 25000000/100000000 rows (25.0%) Table TABLE_TEST-part-1 (3031 sec., 8248 recs/sec)
[========================>] 100000000/100000000 rows (100.0%) Table TABLE_TEST (3032 sec., 32981 recs/sec)
[========================>] 100000000/100000000 rows (100.0%) on total estimated data (3033 sec., avg: 32970 tuples/sec)

Ora2Pg parallel migration using 4 jobs but without oracle_fdw migrates at an average of 32,970 tuples per second, compared to 80,000 tuples/sec with oracle_fdw. The performance gain is thus more than double.

[Chart: Ora2Pg data migration performance with oracle_fdw]

Conclusion

As seen in the above chart, the performance of Ora2Pg with oracle_fdw can be more than double that of Ora2Pg without oracle_fdw. To use this feature, you must have the latest version of Ora2Pg installed. Also, please note that this feature may not be available when you are migrating data to a PostgreSQL service that does not support oracle_fdw. For example, Amazon RDS/Aurora and some other cloud providers do not offer oracle_fdw as a supported extension. In such a case, please contact MigOps to learn how we can help you optimize the data migration speed while migrating to the cloud.


The post Ora2PG now supports oracle_fdw to increase the data migration speed appeared first on MigOps.
