Monday 21 June 2021

[UPDATE] SmartSystemMenu v2.5.1


SmartSystemMenu adds a variety of functions to every window on your machine. When you click the top-left corner of any window, you'll see the available options, such as resizing, transparency, and alignment (for example, moving the window to the lower-right corner). Window movement functions include sending the window to the tray, keeping it on top, and dragging it without using the title bar ("drag-by-mouse").

The program also supports system functions such as showing window information, saving a screenshot, copying text to the clipboard, and changing process priority. The optional launcher menu can be customized, and SmartSystemMenu can be disabled for individual programs.



Saturday 19 June 2021

pgMustard version 4

Via To Save by noreply@blogger.com (Tom)

We're pleased to announce version 4 of pgMustard – a tool that helps people review PostgreSQL query plans quickly.

Major improvements:

  • Plan publishing – Plans can now be published (publicly) for sharing externally or internally, as long as they don't contain sensitive information.
  • Tip improvements – New tips for bad read speeds and trigger timings, as well as small improvements to many of the existing tips, to make them clearer and better scored.
  • Interface improvements – dedicated sections for Buffers, JIT, and Settings information, when provided, can now be found in the reading pane on the left.

Requirements:

  • JSON format query plans from PostgreSQL 9.6 or newer (see the sketch after this list)
  • The interface and advice are in English
  • A GitHub or Google account, for authentication on our hosted version (no installation or database access required)
  • Self-hosted option available, via Docker
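
For reference, a plan in the required format can be captured directly from psql; a minimal sketch, assuming a placeholder table and predicate of your own:

-- Capture an execution plan in JSON format; BUFFERS adds the
-- read/hit counters that several of the tips rely on.
-- (Table name and predicate below are placeholders.)
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT * FROM orders WHERE customer_id = 42;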

pgMustard is commercial software and has a free trial.

If you spend time reviewing query plans, we'd love to hear your thoughts.

Cheers,

Michael and Dave

https://www.pgmustard.com/


Thursday 17 June 2021

The Most Popular Databases – 2006/2021 - Update May 2021

Via PostgreSQL by /u/cuffia_azzurra_2



Tuesday 15 June 2021

Police reachable again: outage at 0900-8844 resolved


In parts of the country, the police experienced a technical outage last night on 0900-8844, the telephone number for non-urgent matters. As a result, the number was temporarily unreachable. The number is now back up.


Thursday 10 June 2021

SQLcl autotrace is way more than “just” (SQL*Plus) autotrace

Via Martins Blog by Martin Bach


As part of the research I did for the 2nd edition of our Exadata book I used session statistics quite heavily. Session statistics can provide additional insights in situations where the wait interface on its own doesn't get you any further.

On the command line, my tools of choice are either Tanel Poder's excellent Session Snapper or Adrian Billington's mystats utility. There are of course others, it just so happened that I've been using the previous two quite heavily. If you prefer a graphical front-end such as SQL Developer, you can grab the change in session statistics easily as described in an earlier post.

I'm not sure why it never occurred to me to check if SQL Developer's best friend, sqlcl, has the same functionality. Well it does, and that's great news. The other great news is that you can download sqlcl directly from Oracle's website now.

Downloading SQLcl the new way

The first step to complete is to pull the latest version of sqlcl. Many thanks to Connor McDonald who told me there is a super-easy way to always get the latest version. Just use the following URL:

https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip

Great! I don't even need to worry about the version:

$ wget https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
--2021-06-10 14:37:29--  https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip
Resolving download.oracle.com (download.oracle.com)... 104.76.200.85
Connecting to download.oracle.com (download.oracle.com)|104.76.200.85|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36960602 (35M) [application/zip]
Saving to: 'sqlcl-latest.zip'

sqlcl-latest.zip          100%[====================================>]  35.25M  6.35MB/s    in 5.7s

2021-06-10 14:37:35 (6.19 MB/s) - 'sqlcl-latest.zip' saved [36960602/36960602]

Note that I'm downloading sqlcl-latest.zip, which translates to 21.1.1.113.1704 at the time of writing:

$ cd sqlcl/bin && ./sql -V
SQLcl: Release 21.1.1.0 Production Build: 21.1.1.113.1704

With the latest sqlcl release unzipped it's time to grab some session statistics.

Session Statistics, please!

To keep matters reasonably simple I'm using a non-partitioned table for my demonstration. As always, my examples use the excellent Swingbench (Order Entry) benchmark schema. Once connected to the database, I can start investigating. By the way I'm running this query on Oracle 19c in a VM on my laptop.

The first step is to enable auto-trace as shown here:

SQL> show version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.1.1.0 build: 21.1.1.113.1704
SQL> help set autotrace
SET AUTOTRACE

SET AUTOT[RACE] {OFF|ON|TRACE[ONLY]}
SQL> set autotrace on
Autotrace Enabled
Shows the execution plan as well as statistics of the statement.

Let's run a query against the orders table:

SQL> select count(*) from soe.orders;

   COUNT(*)
___________
   17157480

Explain Plan
-----------------------------------------------------------
Plan hash value: 508073128

----------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |     1 | 62161   (1)| 00:00:03 |
|   1 |  SORT AGGREGATE    |         |     1 |            |          |
|   2 |   TABLE ACCESS FULL| ORDERS  |    17M| 62161   (1)| 00:00:03 |
----------------------------------------------------------------------

Statistics
-----------------------------------------------------------
         103  CPU used by this session
         104  CPU used when call started
      209348  Cached Commit SCN referenced
           4  Commit SCN cached
         510  DB time
     4171496  Effective IO time
        3587  Number of read IOs issued
          42  Requests to/from client
          42  SQL*Net roundtrips to/from client
           2  buffer is not pinned count
         587  bytes received via SQL*Net from client
       79087  bytes sent via SQL*Net to client
           3  calls to get snapshot scn: kcmgss
           8  calls to kcmgcs
  1870921728  cell physical IO interconnect bytes
           4  cleanout - number of ktugct calls
           4  cleanouts only - consistent read gets
           4  commit txn count during cleanout
      228394  consistent gets
      228384  consistent gets direct
           4  consistent gets examination
           4  consistent gets examination (fastpath)
          10  consistent gets from cache
           6  consistent gets pin
           6  consistent gets pin (fastpath)
           1  cursor authentications
           2  enqueue releases
           2  enqueue requests
           2  execute count
     6884470  file io wait time
           4  immediate (CR) block cleanout applications
       81920  logical read bytes from cache
      228380  no work - consistent read gets
        7233  non-idle wait count
         418  non-idle wait time
           2  opened cursors cumulative
           1  opened cursors current
           1  parse count (hard)
           2  parse count (total)
           1  parse time elapsed
        3587  physical read IO requests
  1870921728  physical read bytes
        3587  physical read total IO requests
  1870921728  physical read total bytes
        3578  physical read total multi block requests
      228384  physical reads
      228384  physical reads direct
           6  process last non-idle time
           1  recursive calls
      228394  session logical reads
           1  sorts (memory)
        2010  sorts (rows)
      228384  table scan blocks gotten
    17157480  table scan disk non-IMC rows gotten
    17157480  table scan rows gotten
           1  table scans (direct read)
           1  table scans (long tables)
         418  user I/O wait time
          43  user calls

SQL>

This is pretty nifty in my opinion. There's an execution plan and a whole raft of relevant session statistics. It would be even better if sqlcl displayed a-rows and e-rows, but you can use Tanel's tpt-oracle scripts for that ;)
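
If you want to stay inside SQLcl rather than pull in the tpt-oracle scripts, a minimal sketch using the gather_plan_statistics hint and DBMS_XPLAN works too; the hint, the format option, and the reuse of the soe.orders table are illustrative, not part of the original post:

-- Collect row-source statistics for this execution only
select /*+ gather_plan_statistics */ count(*) from soe.orders;

-- Display the plan of the last statement with estimated (E-Rows)
-- and actual (A-Rows) figures side by side
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));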

Happy Troubleshooting!


Script to Get the Most CPU Consuming Sessions from Oracle Database

Via dba topics by noreply@blogger.com (Shony)
SQL Script: This SQL query gives you the 10 most CPU consuming Oracle sessions.

Set pages 1000 lines 222
col program form a50 heading "Program"
col CPUMins form 99990 heading "CPU in Mins"
select rownum as rank, a.*
from ( SELECT v.sid, program, v.value / (100 * 60) CPUMins
       FROM v$statname s, v$sesstat v, v$session sess
       WHERE s.name = 'CPU used by this session'
       and sess.sid = v.sid and
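
The excerpt above ends mid-statement; a complete version of the same idea might look like the sketch below. The join on statistic#, the descending sort, and the rownum filter are assumptions, not text from the original post:

set pages 1000 lines 222
col program form a50 heading "Program"
col CPUMins form 99990 heading "CPU in Mins"
select rownum as rank, a.*
from (select v.sid, sess.program, v.value / (100 * 60) CPUMins
      from v$statname s, v$sesstat v, v$session sess
      where s.name = 'CPU used by this session'
      and s.statistic# = v.statistic#   -- assumed join condition
      and sess.sid = v.sid
      order by v.value desc) a          -- assumed ordering
where rownum <= 10;                     -- assumed top-10 filter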


Wednesday 9 June 2021

Disabling autocommit in PostgreSQL can damage your health

Real life is running in autocommit mode
© Laurenz Albe 2021

When analyzing customers' problems, I have seen the mess you can get into if you disable autocommit in your interactive client, so I'd like to bring this topic to a wider audience.

What is autocommit?

In PostgreSQL, like in any other ACID-compliant database, each statement runs in a transaction: if the statement causes an error, PostgreSQL undoes all its effects.

You can start transactions explicitly using the START TRANSACTION statement (or BEGIN in PostgreSQL). However, if you don't do that, different RDBMSs behave differently:

  • some, like Oracle, implicitly start a transaction when you issue the first statement, and you have to run a COMMIT statement to end the transaction
  • others, like PostgreSQL, run each statement in its own transaction until you explicitly start a transaction

The latter mode is called autocommit mode.

PostgreSQL and autocommit

PostgreSQL operates in autocommit mode, and there is no way to change that behavior on the server side.

Back in 2002, release 7.3 introduced a database parameter, autocommit, that allowed users to turn autocommit mode off. This proved to be a bad mistake: now the effects of SQL statements varied depending on that database setting, which led to all kinds of confusion and problems. Consequently, release 7.4 in 2003 removed that feature again.

Instead, PostgreSQL now implements autocommit mode on the client side. Almost all database clients and APIs offer a way to turn autocommit off. A few examples:

  • in psql, you can run "\set AUTOCOMMIT off"
  • with JDBC, you can call java.sql.Connection.setAutoCommit(boolean)
  • in psycopg2, you can call connection.set_session(autocommit=True)
  • with pgAdmin 4, you can click the "down" arrow next to the execute icon in the query tool to turn off autocommit
  • with DBeaver, you can click the autocommit icon in the SQL editor to disable autocommit

Note that in the above list, all clients except for psycopg2 run in autocommit mode by default.

The way that clients disable autocommit is by automatically sending a BEGIN statement before the first statement after the end of a transaction.
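
You can see the effect from another session: once a single statement has run with autocommit off, the backend shows up as "idle in transaction". A minimal sketch against the standard statistics view:

-- Sessions that have an open transaction but are not running anything
SELECT pid, usename, state, xact_start
FROM pg_stat_activity
WHERE state = 'idle in transaction';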

The appeal of disabling autocommit

If you are inside a database transaction, nothing much can go wrong: if you make a mistake, you can simply issue ROLLBACK and everything you did is undone. This is appealing because it allows you to recover from mistakes like running a DELETE statement without a WHERE clause.
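
In psql, that safety net looks like this; a minimal sketch with a placeholder table name:

\set AUTOCOMMIT off
DELETE FROM orders;   -- oops, forgot the WHERE clause
ROLLBACK;             -- with autocommit off, the delete is simply undone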

Moreover, users who are used to database systems that do not operate in autocommit mode (for example, Oracle) like to disable autocommit to get the behavior that they are accustomed to.

This seems to be a safe and useful setting, so many people are surprised to learn that it is a really bad idea in PostgreSQL.

Problems with "autocommit off"

If you disable autocommit, and you don't explicitly commit or roll back, your database session will remain in the state "idle in transaction". Such long-running transactions cause two kinds of problems in the database:

Problem: locks in the database

PostgreSQL holds the locks that SQL statements take until the transaction ends. Now if you ran a data modifying statement after your last commit, some rows in your database are locked. Any other transaction that tries to modify such a row will hang until your transaction is ended.

To make things worse, each SQL statement causes a table-level lock on the tables it accesses: even a "harmless" SELECT will take an ACCESS SHARE lock on the tables it reads. This is to prevent concurrent data definition statements like DROP TABLE or ALTER TABLE from deleting the table or changing its definition while the SELECT is reading it.

As a consequence, a session with an open transaction can block data definition statements indefinitely. In a DevOps environment, where new versions are deployed frequently, this will lead to serious disruptions. So even a simple SELECT from a client with autocommit disabled can do damage!
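
To see which idle transactions are holding locks, you can join pg_locks to pg_stat_activity; a minimal sketch (one of many ways to write this query):

-- Relation locks held by sessions that are idle in a transaction
SELECT a.pid, a.usename, a.xact_start, l.relation::regclass AS locked_table, l.mode
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE a.state = 'idle in transaction'
  AND l.relation IS NOT NULL;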

Problem: autovacuum maintenance

Whenever you update or delete a row in PostgreSQL, it retains the old version of the row to satisfy concurrent read requests. These old versions soon become obsolete and have to be removed. This is the job of the autovacuum daemon in PostgreSQL. If your database session has a long-running open transaction, it is possible that autovacuum cannot delete any row versions that are younger than the start of your transaction, because you might still need them.

The consequence is that autovacuum keeps running to get rid of those "dead tuples", but it cannot delete them and cannot make any progress. This leads to table bloat, where a table grows out of proportion while containing mostly junk data. Once you have a bloated table, you usually have to take downtime to run VACUUM (FULL) to fix the problem. There are tools like pg_squeeze to make this less painful.
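
Two quick checks against the standard statistics views show whether this is happening; a minimal sketch:

-- The oldest transaction still open anywhere in the cluster
SELECT pid, usename, state, xact_start
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 1;

-- Tables accumulating dead tuples that autovacuum cannot yet remove
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;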

How to defend against "autocommit off"

There are a couple of things that the database administrator can do to mitigate these problems:

  • Set the parameter idle_in_transaction_session_timeout to a value greater than 0. This will automatically cancel all transactions that are idle for longer than the specified time. The transaction is rolled back, and the client receives an error message.
  • Set the parameter old_snapshot_threshold to a value greater than -1. Then VACUUM will clean up dead tuples that have been dead for longer than that time, even if a long-running transaction might still need them. If the transaction tries to access a page that has been processed by such a VACUUM, it will receive an error message. Note that this parameter will not prevent idle transactions from holding locks!

While these parameters allow you to prevent the worst, they are "not nice", because they will lead to errors and rollbacks on the client side.
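
For illustration, a minimal sketch of how an administrator could set both parameters; the values are examples, not recommendations:

-- Terminate sessions whose transaction has been idle for more than 10 minutes
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();   -- takes effect on reload

-- Let VACUUM remove tuples that have been dead for more than 60 minutes,
-- even if a long transaction might still need them (requires a server restart)
ALTER SYSTEM SET old_snapshot_threshold = '60min';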

Conclusion

Nice as it may seem, resist the temptation to disable autocommit in your interactive client. Get used to PostgreSQL's autocommit mode; otherwise you will endanger the health of your database. The administrator can set idle_in_transaction_session_timeout and old_snapshot_threshold to prevent such damage, at the price of causing errors on the client side.

The post Disabling autocommit in PostgreSQL can damage your health appeared first on Cybertec.
