STARTING WITH PG
This means that PostgreSQL handles writes to logs. To see where the logs really go, we need to consult two options: log_directory – this is the directory that will store logs. It defaults to log, which means that it will be a log directory within data_directory. You can set it to a value starting with /
GETTING VALUE FROM DYNAMIC COLUMN IN PL/PGSQL TRIGGERS
Now, let's see how to get the specific value. I will write a set of functions, each testing one approach, that will take the row and the name of a field, and will return the value of the field. First approach – plain pl/PgSQL, using EXECUTE: =$ CREATE OR REPLACE FUNCTION get_dynamic_plpgsql (IN p_row test, IN p_column TEXT) RETURNS INT4 AS $$ DECLARE v
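The excerpt above cuts off mid-function; a minimal sketch of the EXECUTE-based approach could look like the following. The `test` table definition is an assumption for illustration, not taken from the original post.

```sql
-- Illustrative table; the original post's "test" type is assumed to look
-- something like this.
CREATE TABLE test (id int4, some_value int4);

CREATE OR REPLACE FUNCTION get_dynamic_plpgsql(IN p_row test, IN p_column TEXT)
    RETURNS INT4 AS $$
DECLARE
    v_result INT4;
BEGIN
    -- format() with %I safely quotes the column name as an identifier
    EXECUTE format('SELECT ($1).%I', p_column) INTO v_result USING p_row;
    RETURN v_result;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT get_dynamic_plpgsql(t, 'some_value') FROM test t;
```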
HOW TO EFFECTIVELY DUMP POSTGRESQL DATABASES
Time for parallel load of dir dump was the same. So, finally – there is one BIG difference in favor of dir format – we can dump databases in parallel. For example: =$ time pg_dump -F d -j 8 -C -f dump-j8-dir depesz_explain real 0m24.928s user 2m30.755s sys 0m2.125s. 24 seconds is only 7 seconds more than plain text dump, but the dump is smaller
PG_STAT_FILE
I was faced with an interesting problem. Which schema, in my DB, uses the most disk space? Theoretically it's trivial, as we have a set of helpful functions: pg_column_size. pg_database_size. pg_indexes_size. pg_relation_size. pg_table_size. pg_tablespace_size.
WHICH TABLES SHOULD BE AUTO VACUUMED OR AUTO ANALYZED
Rules are simple: run vacuum if n_dead_tup is larger than reltuples * autovacuum_vacuum_scale_factor + autovacuum_vacuum_threshold; run analyze if n_mod_since_analyze is larger than reltuples * autovacuum_analyze_scale_factor + autovacuum_analyze_threshold. Let's do some math. Assuming I have a table with 10,000 rows, it will
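Plugging the stock defaults (vacuum scale factor 0.2, analyze scale factor 0.1, both thresholds 50) into the formulas above for a 10,000-row table gives:

```sql
-- vacuum fires once n_dead_tup exceeds reltuples * 0.2 + 50
SELECT 10000 * 0.2 + 50 AS vacuum_threshold;   -- 2050
-- analyze fires once n_mod_since_analyze exceeds reltuples * 0.1 + 50
SELECT 10000 * 0.1 + 50 AS analyze_threshold;  -- 1050
```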
PARTITIONING
Partitioning is a method of splitting large (based on number of records, not number of columns) tables into many smaller ones. Preferably it should be done in a way that is transparent to the application. One of the less used features of PostgreSQL is the fact that it's an object-relational database.
HOW TO SEND MAIL FROM DATABASE?
A similar question has been asked many times on mailing lists and on IRC. Sometimes it's not mail sending, but file/directory creation, or something else that generally requires some interaction with the “world outside of the database".
HOW TO INSERT DATA TO DATABASE
best possible multi-row insert without transactions or prepared statements – 100 rows per statement. time used – 42.07 seconds! so, if you want to insert data as fast as possible – use copy (or better yet – pgbulkload). if for whatever reason you can't use copy, then use multi-row inserts (new in 8.2!). then if you can, bundle them in
WAITING FOR 9.1
Waiting for 9.1 – FOREACH IN ARRAY. On 16th of February, Tom Lane committed patch: Add FOREACH IN ARRAY looping to plpgsql. (I'm not entirely sure that we've finished bikeshedding the syntax details, but the functionality seems OK.) Pavel Stehule, reviewed by Stephen Frost and Tom Lane.
SELECT * FROM DEPESZ;
In the first month (December of 2008) there were 391 plans added. Almost exactly 10 years later, in October 2018, we got 394 plans added, on average, each day. Lately the average daily count of new plans (monthly average) is 400-550. The best day was 21st of February 2019, when we got 5320 new plans.
MANY CHANGES ON EXPLAIN.DEPESZ.COM
Some time ago Eugen Konkov mailed me that he'd like to have some changes on explain.depesz.com. One of the changes was an actual bug, but the rest were improvements to
STARTING WITH PG
Previously I wrote about locating config files. The thing is – postgresql.conf is not the only place you can set your configuration in. Here, I'll describe all the places that can be used, why we even have more than one place, and finally – how to find out where a given value comes from.
EXPLAINING THE UNEXPLAINABLE
Generally it's so simple that I shouldn't need to describe it, but since I will use it in the next examples, I decided to write a thing about it. Function Scan is a very simple node – it runs a function that returns a recordset – that is, it will not run a function like “lower()", but a function that returns (at least potentially) multiple rows, or multiple columns.
STUPID TRICKS
Neat for tables with few rows and a couple of columns you want to exclude, which sadly is a bit too specific for me to have any use for. I have a 200K row table with a column with ~20K of binary in each row that I often want to leave out when peeking at it.
“ERROR: OPERATOR DOES NOT EXIST: INTEGER = TEXT” HOW TO FIX IT?
The first answer, and the best way to solve the problem, is: fix the code. Check which value should be cast to which, and add proper casts. Like this: SELECT * FROM tablea a JOIN tableb b ON a.intfield = b.textfield; into: SELECT * FROM tablea a JOIN tableb b ON a.intfield = b.textfield::int4; It requires a change of code, or even data structure.
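A self-contained reproduction of the error and the cast-based fix (table names follow the excerpt above; the sample data is made up):

```sql
CREATE TABLE tablea (intfield int4);
CREATE TABLE tableb (textfield text);
INSERT INTO tablea VALUES (1);
INSERT INTO tableb VALUES ('1');

-- fails with: ERROR: operator does not exist: integer = text
-- SELECT * FROM tablea a JOIN tableb b ON a.intfield = b.textfield;

-- works – cast the text side to the integer side's type:
SELECT * FROM tablea a JOIN tableb b ON a.intfield = b.textfield::int4;
```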
HOW TO GET SHORTEST CONNECTION BETWEEN TWO CITIES
22:28 < rafasc> i am trying to use plpgsql to find the shortest path between two cities, each pair of cities has one or more edges, each edge has a different wheight. 22:28 < rafasc> Is there a easy way to compute the shortest path between two cities?
HOW TO SETUP SSL CONNECTIONS AND
Of course – you might want to opt to use a passwordless ssl key, so that you can start/restart the server without being prompted for passwords. To do it, it's enough to: =$ openssl rsa -in pg-server.key -out pg-server.new.key =$ mv pg-server.new.key pg-server.key.
STUPID TRICKS
Dynamic updates of fields in NEW in PL/pgSQL. Today, on #postgresql on IRC, strk asked about updating fields in the NEW record, in plpgsql, but where the name of the field is in a variable.
VARIABLES IN SQL, WHAT, HOW, WHEN?
To do so, you need to use some kind of prefix (class). In PostgreSQL before 9.2, you had to configure the class in postgresql.conf, using: custom_variable_classes = depesz. So you can use the “depesz" class. In 9.2 and later, classes are defined on use. Usage of these variables is
EXPLAINING THE UNEXPLAINABLE
This is the simplest possible operation – PostgreSQL opens the table file, and reads rows, one by one, returning them to the user or to an upper node in the explain tree, for example to limit, as in:
WHY I’M NOT FAN OF UUID DATATYPE
Recently, on irc, there were a couple of cases where someone wanted to use uuid as the datatype for their primary key. I opposed, and tried to explain, but IRC doesn't really allow for longer texts, so I figured I'll write a blogpost. First problem – UUID values are completely opaque.
SQL – SELECT * FROM DEPESZ;
SQL clause. In our datetime parsing engine we currently support it with the SSSS name. This commit adds SSSSS as an alias for SSSS. The alias is added in favor of the upcoming jsonpath .datetime() method. But it's also supported in to_date()/to_timestamp() as a positive side effect.
WHY IS UPSERT SO COMPLICATED?
If you worked with certain other (than PostgreSQL) open source databases, you might wonder why PostgreSQL doesn't have MERGE, and why the UPSERT example in the documentation is so complicated. Well, let's try to answer the question, and look into some alternatives. First, of
WHAT IS THE OVERHEAD OF LOGGING?
With logging set, and another 45 minutes of tests, I got these results: 1 worker: 149.323508 tps (latency: 6.688188 ms) 4 workers: 159.188091 tps (latency: 25.117503 ms) 8 workers: 164.343266 tps (latency: 48.665579 ms) One more thing is that syslog might drop some log lines, or so I've heard.
HOW TO SETUP SSL CONNECTIONS AND
--- /etc/ssl/openssl.cnf 2015-03-19 16:14:06.000000000 +0100 +++ openssl.cnf 2015-05-11 21:15:30.089342711 +0200 @@ -126,17 +126,18 @@ countryName = Country Name (2 letter code) -countryName_default = AU +countryName_default = PL countryName_min = 2 countryName_max = 2 stateOrProvinceName = State or Province Name (full name)
WAITING FOR 9.4
Waiting for 9.4 – Support ordered-set (WITHIN GROUP) aggregates. On 23rd of December, Tom Lane committed patch: Support ordered-set (WITHIN GROUP) aggregates. This patch introduces generic support for ordered-set and hypothetical-set aggregate functions, as well as implementations of the instances defined in SQL:2008 (percentile_cont
MILLION EXPLAIN PLANS…
Back in 2007 I wrote a simple script to add total time to explain analyze output. It was very helpful for me. Then, around a year later, I figured that it could be useful for others, so I wrote a simple site that got plans and displayed them with extra info. It didn't look great.
Two years later I figured it would be good to make it look nicer. I asked a friend – Łukasz Lewandowski – about it, and together we made a new version that was easier on the eyes.
Since then there were no layout changes, just some new functionality: deleting plans, anonymizing/obfuscating them, user accounts, plan stats.
The site seemed to catch on. In the first month (December of 2008) there were 391 plans added. Almost exactly 10 years later, in October 2018, we got 394 plans added, on average, each day. Lately the average daily count of new plans (monthly average) is 400-550.
The best day was 21st of February 2019, when we got 5320 new plans. Most likely due to a link to the site being posted on some news aggregator or forum.
And, just yesterday, at around 4:30pm UTC, the millionth plan was pasted.
That is amazing and I would like to thank all of you – it really brightens my day when I see that people are using the site, and it (hopefully) helps them.
Posted on 2021-05-18 | Tags: explain, explain.depesz.com, postgresql | 5 Comments on Million explain plans…
GETTING VALUE FROM DYNAMIC COLUMN IN PL/PGSQL TRIGGERS?
Every so often, on irc, someone asks how to get the value from a column that is passed as an argument. This is generally seen as not possible, as pl/PgSQL doesn't have support for dynamic column names. We can work around it, though. Are the workarounds usable, in terms of performance?
Continue reading Getting value from dynamic column in pl/PgSQL triggers?
Posted on 2021-04-21 | Tags: benchmark, dynamic, hstore, json, jsonb, performace, plperl, plpgsql, postgresql, trigger | Leave a comment on Getting value from dynamic column in pl/PgSQL triggers?
CHANGES ON EXPLAIN.DEPESZ.COM – EXTRACTED QUERY FROM AUTO-EXPLAIN PLANS
Some time ago James Courtney reported missing functionality. Specifically, when one uses auto-explain, logged explains contain the query text. So, when such an explain is then pasted on explain.depesz.com, it stands to reason that it should be able to extract the query on its own, without having to manually extract it and put it in the query box. It took me a while, but finally, I got it working today. And you can see it in all four explain formats:
* JSON
* TEXT
* XML
* YAML
Also, while I'm writing – it seems that sometime next month, there will be the 1 millionth plan uploaded to the site. Hope you all find it useful.
Posted on 2021-04-14 | Tags: auto_explain, explain, explain.depesz.com, json, postgresql, query, text, xml, yaml | Leave a comment on Changes on explain.depesz.com – extracted query from auto-explain plans
WAITING FOR POSTGRESQL 14 – ADD UNISTR FUNCTION
On 29th of March 2021, Peter Eisentraut committed patch:
Add unistr function This allows decoding a string with Unicode escape sequences. It is similar to Unicode escape strings, but offers some more flexibility. Author: Pavel Stehule
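For illustration, a call in the style of the PostgreSQL 14 documentation, mixing a 4-digit (\XXXX) and a 6-digit (\+XXXXXX) escape for the letter "a":

```sql
-- unistr() decodes Unicode escape sequences embedded in the string
SELECT unistr('d\0061t\+000061');
-- data
```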
Tags: emoji, pg14, postgresql, python, unicode, unistr, waiting | 2 Comments on Waiting for PostgreSQL 14 – Add unistr function
WAITING FOR POSTGRESQL 14 – ADD “PG_DATABASE_OWNER” DEFAULT ROLE.
On 26th of March 2021, Noah Misch committed patch:
Add "pg_database_owner" default role. Membership consists, implicitly, of the current database owner. Expect use in template databases. Once pg_database_owner has rights within a template, each owner of a database instantiated from that template will exercise those rights. Reviewed by John Naylor. Discussion: https://postgr.es/m/20201228043148.GA1053024@rfd.leadboat.com
Continue reading Waiting for PostgreSQL 14 – Add “pg_database_owner" default role.
Posted on 2021-04-01 | Tags: administration, pg14, pg_database_owner, postgresql, privileges, waiting | Leave a comment on Waiting for PostgreSQL 14 – Add “pg_database_owner” default role.
WAITING FOR POSTGRESQL 14 – ADD DATE_BIN FUNCTION
On 24th of March 2021, Peter Eisentraut committed patch:
Add date_bin function Similar to date_trunc, but allows binning by an arbitrary interval rather than just full units. Author: John Naylor
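A quick illustration of the difference: date_trunc can only snap to whole units (hour, day, …), while date_bin('stride', source, origin) snaps to, say, 15-minute buckets counted from an arbitrary origin:

```sql
-- bins 10:37:21 down to the start of its 15-minute bucket
SELECT date_bin('15 minutes',
                timestamp '2021-03-24 10:37:21',
                timestamp '2001-01-01');
-- 2021-03-24 10:30:00
```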
Tags: date_bin, date_trunc, pg14, postgresql, waiting | 7 Comments on Waiting for PostgreSQL 14 – Add date_bin function
WAITING FOR POSTGRESQL 14 – ALLOW CONFIGURABLE LZ4 TOAST COMPRESSION.
On 19th of March 2021, Robert Haas committed patch:
Allow configurable LZ4 TOAST compression. There is now a per-column COMPRESSION option which can be set to pglz (the default, and the only option up until now) or lz4. Or, if you like, you can set the new default_toast_compression GUC to lz4, and then that will be the default for new table columns for which no value is specified. We don't have lz4 support in the PostgreSQL code, so to use lz4 compression, PostgreSQL must be built --with-lz4. In general, TOAST compression means compression of individual column values, not the whole tuple, and those values can either be compressed inline within the tuple or compressed and then stored externally in the TOAST table, so those properties also apply to this feature. Prior to this commit, a TOAST pointer has two unused bits as part of the va_extsize field, and a compressed datum has two unused bits as part of the va_rawsize field. These bits are unused because the length of a varlena is limited to 1GB; we now use them to indicate the compression type that was used. This means we only have bit space for 2 more built-in compression types, but we could work around that problem, if necessary, by introducing a new vartag_external value for any further types we end up wanting to add. Hopefully, it won't be too important to offer a wide selection of algorithms here, since each one we add not only takes more coding but also adds a build dependency for every packager. Nevertheless, it seems worth doing at least this much, because LZ4 gets better compression than PGLZ with less CPU usage. It's possible for LZ4-compressed datums to leak into composite type values stored on disk, just as it is for PGLZ. It's also possible for LZ4-compressed attributes to be copied into a different table via SQL commands such as CREATE TABLE AS or INSERT .. SELECT. It would be expensive to force such values to be decompressed, so PostgreSQL has never done so.
For the same reasons, we also don't force recompression of already-compressed values even if the target table prefers a different compression method than was used for the source data. These architectural decisions are perhaps arguable but revisiting them is well beyond the scope of what seemed possible to do as part of this project. However, it's relatively cheap to recompress as part of VACUUM FULL or CLUSTER, so this commit adjusts those commands to do so, if the configured compression method of the table happens not to match what was used for some column value stored therein. Dilip Kumar. The original patches on which this work was based were written by Ildus Kurbangaliev, and those patches were based on even earlier work by Nikita Glukhov, but the design has since changed very substantially, since allowing a potentially large number of compression methods that could be added and dropped on a running system proved too problematic given some of the architectural issues mentioned above; the choice of which specific compression method to add first is now different; and a lot of the code has been heavily refactored. More recently, Justin Pryzby helped quite a bit with testing and reviewing and this version also includes some code contributions from him. Other design input and review from Tomas Vondra, Álvaro Herrera, Andres Freund, Oleg Bartunov, Alexander Korotkov, and me.
Discussion: http://postgr.es/m/20170907194236.4cefce96%40wp.localdomain Discussion: http://postgr.es/m/CAFiTN-uUpX3ck%3DK0mLEk-G_kUQY%3DSNOTeqdaNRR9FMdQrHKebw%40mail.gmail.com
Continue reading Waiting for PostgreSQL 14 – Allow configurable LZ4 TOAST compression.
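Assuming a server built --with-lz4, the per-column option and the GUC from the commit message can be exercised like this (table and column names are made up for illustration):

```sql
-- per-column setting
CREATE TABLE docs (
    id   int8 GENERATED ALWAYS AS IDENTITY,
    body text COMPRESSION lz4
);

-- or make lz4 the default for new columns
SET default_toast_compression = 'lz4';

-- check which method a stored value was actually compressed with
INSERT INTO docs (body) SELECT repeat('some compressible text. ', 1000);
SELECT pg_column_compression(body) FROM docs;
-- lz4
```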
Posted on 2021-03-22 | Tags: compression, lz4, performance, pg14, pglz, postgresql, toast, waiting | 5 Comments on Waiting for PostgreSQL 14 – Allow configurable LZ4 TOAST compression.
STARTING WITH PG – WHERE ARE LOGS?
Just so that it will be perfectly clear: the logs I have in mind are the ones for DBAs to read – with slow queries, errors, and other interesting information. So, how does one find them?
Continue reading Starting with Pg – where are logs?
Posted on 2021-03-05 | Tags: beginner, configuration, guc, howto, log, logs, postgresql, starting, tutorial | 1 Comment on Starting with Pg – where are logs?
FIXED DISPLAY OF BACKWARD SCANS ON EXPLAIN.DEPESZ.COM
Yaroslav Schekin (ysch) reported on irc that Index Scans Backward do not display properly. After checking, I found out that if the explain is in JSON/YAML/XML – the node type is changed to “Index Scan" (or “Index Only Scan" if it was originally “Index Only Scan Backward").
Continue reading Fixed display of Backward scans on explain.depesz.com
Posted on 2021-03-03 | Tags: backward, explain, explain.depesz.com, perl, postgresql | Leave a comment on Fixed display of Backward scans on explain.depesz.com
STARTING WITH PG – WHERE/HOW CAN I SET CONFIGURATION PARAMETERS?
Previously I wrote about locating config files.
The thing is – postgresql.conf is not the only place you can set your configuration in. Here, I'll describe all the places that can be used, why we even have more than one place, and finally – how to find out where a given value comes from.
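The usual tools for the last part (a sketch; work_mem is just an example parameter) are the pg_settings view, which records where each effective value came from, and ALTER SYSTEM, which writes to postgresql.auto.conf without touching postgresql.conf:

```sql
-- where does the current value come from?
SELECT name, setting, source, sourcefile, sourceline
  FROM pg_settings
 WHERE name = 'work_mem';

-- set it cluster-wide without editing postgresql.conf…
ALTER SYSTEM SET work_mem = '64MB';
-- …and reload so the change takes effect
SELECT pg_reload_conf();
```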
Continue reading Starting with Pg – where/how can I set configuration parameters?
Posted on 2021-03-01 | Tags: beginner, configuration, guc, howto, postgresql, starting, tutorial | 1 Comment on Starting with Pg – where/how can I set configuration parameters?