Preface
A colleague reported today: "When widening a varchar column (small to large), the ALTER on a table with a dozen-plus GB of data took about twenty minutes. The application was stopped for the change, and there were no locks or other transactions running in the database. After dropping the index on that column, it finished in seconds." At first glance, the conclusion looks like the type change triggered an index rewrite. I previously shared an article on table structure changes which said that widening a column's data type does not rewrite the table, and therefore the index should not be rewritten either. Let's look at an example 👇🏻
postgres=# create table test2(id int,info varchar(5));
CREATE TABLE
postgres=# create index on test2(info);
CREATE INDEX
postgres=# select pg_relation_filepath('test2_info_idx');
pg_relation_filepath
----------------------
base/18381/18814
(1 row)
postgres=# alter table test2 alter column info type varchar(6); --- small to large
ALTER TABLE
postgres=# select pg_relation_filepath('test2_info_idx'); --- index not rewritten
pg_relation_filepath
----------------------
base/18381/18814
(1 row)
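When a table has several indexes, checking them one by one gets tedious. A query along these lines snapshots the on-disk file of a table together with all of its indexes in one shot; run it before and after the ALTER, and any unchanged path means no rewrite (a sketch for illustration; the table name `test2` follows the example above):

```sql
-- File path of the table itself plus each of its indexes
select c.relname, c.relkind, pg_relation_filepath(c.oid)
from pg_class c
where c.oid = 'test2'::regclass
union all
select c.relname, c.relkind, pg_relation_filepath(c.oid)
from pg_index i
join pg_class c on c.oid = i.indexrelid
where i.indrelid = 'test2'::regclass;
```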
So why was my colleague's case so slow? Was the earlier analysis wrong?
Partitioned Tables
First, let's review the earlier conclusions:
- varchar(x) to varchar(y) when y>=x. It works too if going from varchar(x) to varchar or text (no size limitation)
- numeric(x,z) to numeric(y,z) when y>=x, or to numeric without specifier
- varbit(x) to varbit(y) when y>=x, or to varbit without specifier
- timestamp(x) to timestamp(y) when y>=x or timestamp without specifier
- timestamptz(x) to timestamptz(y) when y>=x or timestamptz without specifier
- interval(x) to interval(y) when y>=x or interval without specifier
- timestamp to text, varchar, varchar(n), char(n): rewrite required
- timestamp(x) to text, varchar, varchar(n), char(n) when n>=x: rewrite required
- text to char, char(x), varchar(n): rewrite required
- text to varchar: no rewrite
- numeric(x) to numeric(y) when y>=x: no rewrite
- numeric(x) to numeric: no rewrite
- numeric(x,y) to numeric: no rewrite
- alter table xx add column xx serial/bigserial: rewrite required
In short, when a column's length or precision grows, no rewrite is needed. The only difference in this case is that the table is partitioned, so let's try to reproduce it.
postgres=# \d+ t1
Partitioned table "public.t1"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+----------------------+-----------+----------+---------+----------+-------------+--------------+-------------
id | integer | | | | plain | | |
info | character varying(5) | | | | extended | | |
num | integer | | | | plain | | |
name | character varying(5) | | | | extended | | |
Partition key: RANGE (num)
Partitions: t1_part1 FOR VALUES FROM (0) TO (10000000),
t1_part2 FOR VALUES FROM (10000000) TO (20000000),
t1_part3 FOR VALUES FROM (20000000) TO (30000000),
t1_part4 FOR VALUES FROM (30000000) TO (40000000),
t1_part5 FOR VALUES FROM (40000000) TO (50000000)
postgres=# select count(*) from t1;
count
----------
40000000
(1 row)
Since the data type of the partition key cannot be changed, we test against a non-partition-key column instead.
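For reference, a test table like the one above can be reproduced with DDL along these lines (the partition bounds follow the \d+ output; the generate_series load is my assumption, not the original script):

```sql
create table t1 (
    id   int,
    info varchar(5),
    num  int,
    name varchar(5)
) partition by range (num);

create table t1_part1 partition of t1 for values from (0)        to (10000000);
create table t1_part2 partition of t1 for values from (10000000) to (20000000);
create table t1_part3 partition of t1 for values from (20000000) to (30000000);
create table t1_part4 partition of t1 for values from (30000000) to (40000000);
create table t1_part5 partition of t1 for values from (40000000) to (50000000);

-- Load 40 million rows routed across the partitions by num
insert into t1
select i, left(md5(i::text), 5), i, left(md5(i::text), 5)
from generate_series(1, 40000000) as i;
```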
postgres=# create index on t1(info);
CREATE INDEX
postgres=# \d t1_part1
Table "public.t1_part1"
Column | Type | Collation | Nullable | Default
--------+----------------------+-----------+----------+---------
id | integer | | |
info | character varying(5) | | |
num | integer | | |
name | character varying(5) | | |
Partition of: t1 FOR VALUES FROM (0) TO (10000000)
Indexes:
"t1_part1_info_idx" btree (info)
Modifying a non-indexed column
postgres=# select pg_relation_filepath('t1_part1_info_idx');
pg_relation_filepath
----------------------
base/18381/18860
(1 row)
postgres=# alter table t1 alter column name type varchar(6); --- non-indexed column
ALTER TABLE
postgres=# select pg_relation_filepath('t1_part1_info_idx');
pg_relation_filepath
----------------------
base/18381/18860
(1 row)
As you can see, widening a non-indexed column does not rewrite the table, so naturally the index is not rewritten either. Shrinking, however, follows the rule stated earlier: the table must be rewritten, and so must the index.
We can see the database rebuilds each child table and its corresponding index in turn 👇🏻
postgres=# alter table t1 alter column name type varchar(5);
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: rewriting table "t1_part1" --- rewrite the first child table
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: building index "t1_part1_info_idx" on table "t1_part1" with request for 1 parallel workers --- rebuild index
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147670/1/9
DEBUG: index "t1_part1_info_idx" can safely use deduplication
DEBUG: drop auto-cascades to type pg_temp_18764
DEBUG: drop auto-cascades to type pg_temp_18764[]
DEBUG: rewriting table "t1_part2" --- rewrite the second child table
DEBUG: building index "t1_part2_info_idx" on table "t1_part2" with request for 1 parallel workers --- rebuild index
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147670/1/16
DEBUG: index "t1_part2_info_idx" can safely use deduplication
DEBUG: drop auto-cascades to type pg_temp_18768
DEBUG: drop auto-cascades to type pg_temp_18768[]
DEBUG: rewriting table "t1_part3"
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
...
DEBUG: compacted fsync request queue from 32768 entries to 1 entries
DEBUG: building index "t1_part3_info_idx" on table "t1_part3" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147670/1/23
DEBUG: index "t1_part3_info_idx" can safely use deduplication
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: drop auto-cascades to type pg_temp_18771
DEBUG: drop auto-cascades to type pg_temp_18771[]
DEBUG: rewriting table "t1_part4"
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: creating and filling new WAL file
...
DEBUG: building index "t1_part4_info_idx" on table "t1_part4" with request for 1 parallel workers
DEBUG: qsort_tuple
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147670/1/30
DEBUG: index "t1_part4_info_idx" can safely use deduplication
DEBUG: drop auto-cascades to type pg_temp_18777
DEBUG: drop auto-cascades to type pg_temp_18777[]
DEBUG: rewriting table "t1_part5"
DEBUG: building index "t1_part5_info_idx" on table "t1_part5" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147670/1/37
DEBUG: index "t1_part5_info_idx" can safely use deduplication
DEBUG: drop auto-cascades to type pg_temp_18780
DEBUG: drop auto-cascades to type pg_temp_18780[]
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 147670/1/41
ALTER TABLE
Modifying an indexed column
Now let's look at modifying the indexed column, widening it:
postgres=# select pg_relation_filepath('t1_part1_info_idx');
pg_relation_filepath
----------------------
base/18381/18907
(1 row)
postgres=# select pg_relation_filepath('t1_part1');
pg_relation_filepath
----------------------
base/18381/18885
(1 row)
postgres=# alter table t1 alter column info type varchar(8); --- takes a long time
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: drop auto-cascades to index t1_part1_info_idx
DEBUG: drop auto-cascades to index t1_part2_info_idx
DEBUG: drop auto-cascades to index t1_part3_info_idx
DEBUG: drop auto-cascades to index t1_part4_info_idx
DEBUG: drop auto-cascades to index t1_part5_info_idx
DEBUG: building index "t1_part1_info_idx" on table "t1_part1" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147674/1/15
DEBUG: index "t1_part1_info_idx" can safely use deduplication
DEBUG: building index "t1_part2_info_idx" on table "t1_part2" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147674/1/17
DEBUG: index "t1_part2_info_idx" can safely use deduplication
DEBUG: building index "t1_part3_info_idx" on table "t1_part3" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147674/1/19
DEBUG: index "t1_part3_info_idx" can safely use deduplication
DEBUG: building index "t1_part4_info_idx" on table "t1_part4" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147674/1/21
DEBUG: index "t1_part4_info_idx" can safely use deduplication
DEBUG: building index "t1_part5_info_idx" on table "t1_part5" with request for 1 parallel workers
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147674/1/23
DEBUG: index "t1_part5_info_idx" can safely use deduplication
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 147674/1/24
ALTER TABLE
postgres=# select pg_relation_filepath('t1_part1_info_idx');
pg_relation_filepath
----------------------
base/18381/18913
(1 row)
postgres=# select pg_relation_filepath('t1_part1');
pg_relation_filepath
----------------------
base/18381/18885
(1 row)
Here the result deviates from our earlier conclusion: the indexes were rewritten even though the table was not! The stack trace also confirms that an index is indeed being built:
[postgres@xiongcc ~]$ pstack 15782
#0 mergeonerun (state=0x29d6ad8) at tuplesort.c:3206
#1 mergeruns (state=state@entry=0x29d6ad8) at tuplesort.c:3154
#2 0x000000000091419f in tuplesort_performsort (state=0x29d6ad8) at tuplesort.c:2271
#3 0x00000000004f2318 in _bt_parallel_scan_and_sort (btspool=btspool@entry=0x2991810, btspool2=0x0, btshared=0x7fe1bb8952c0, sharedsort=<optimized out>, sharedsort2=0x0, sortmem=<optimized out>, progress=progress@entry=true) at nbtsort.c:1991
#4 0x00000000004f3065 in _bt_leader_participate_as_worker (buildstate=<optimized out>, buildstate=<optimized out>) at nbtsort.c:1781
#5 _bt_begin_parallel (request=<optimized out>, isconcurrent=<optimized out>, buildstate=0x7ffe753b9730) at nbtsort.c:1652
#6 _bt_spools_heapscan (indexInfo=0x29c1b60, buildstate=0x7ffe753b9730, index=0x2ad12e8, heap=0x2ace998) at nbtsort.c:398
#7 btbuild (heap=0x2ace998, index=0x2ad12e8, indexInfo=0x29c1b60) at nbtsort.c:329
#8 0x000000000054a973 in index_build (heapRelation=heapRelation@entry=0x2ace998, indexRelation=indexRelation@entry=0x2ad12e8, indexInfo=indexInfo@entry=0x29c1b60, isreindex=isreindex@entry=false, parallel=parallel@entry=true) at index.c:3018
#9 0x000000000054bc24 in index_create (heapRelation=heapRelation@entry=0x2ace998, indexRelationName=indexRelationName@entry=0x29c0f38 "t1_part4_info_idx", indexRelationId=18801, indexRelationId@entry=0, parentIndexRelid=parentIndexRelid@entry=18797, parentConstraintId=parentConstraintId@entry=0, relFileNode=<optimized out>, indexInfo=indexInfo@entry=0x29c1b60, indexColNames=indexColNames@entry=0x29c25b8, accessMethodObjectId=accessMethodObjectId@entry=403, tableSpaceId=tableSpaceId@entry=0, collationObjectId=collationObjectId@entry=0x29c26a8, classObjectId=classObjectId@entry=0x29c26c0, coloptions=coloptions@entry=0x29c26d8, reloptions=reloptions@entry=0, flags=flags@entry=0, constr_flags=0, allow_system_table_mods=false, is_internal=true, constraintId=constraintId@entry=0x7ffe753b9cd4) at index.c:1252
#10 0x00000000005ec60c in DefineIndex (relationId=relationId@entry=18777, stmt=stmt@entry=0x29c2238, indexRelationId=indexRelationId@entry=0, parentIndexId=parentIndexId@entry=18797, parentConstraintId=0, is_alter_table=is_alter_table@entry=true, check_rights=check_rights@entry=false, check_not_in_use=check_not_in_use@entry=false, skip_build=skip_build@entry=false, quiet=quiet@entry=true) at indexcmds.c:1138
#11 0x00000000005ecab5 in DefineIndex (relationId=<optimized out>, stmt=stmt@entry=0x2ac4b28, indexRelationId=18797, indexRelationId@entry=0, parentIndexId=parentIndexId@entry=0, parentConstraintId=parentConstraintId@entry=0, is_alter_table=is_alter_table@entry=true, check_rights=false, check_not_in_use=check_not_in_use@entry=false, skip_build=false, quiet=quiet@entry=true) at indexcmds.c:1399
#12 0x00000000006070ef in ATExecAddIndex (stmt=0x2ac4b28, is_rebuild=is_rebuild@entry=true, lockmode=8, rel=0x7fe1bb725bb8, tab=0x2a56148) at tablecmds.c:8604
#13 0x0000000000619466 in ATExecCmd (wqueue=wqueue@entry=0x7ffe753ba208, tab=tab@entry=0x2a56148, cmd=0x2ac61c0, lockmode=lockmode@entry=8, cur_pass=cur_pass@entry=2, context=context@entry=0x7ffe753ba4b0) at tablecmds.c:4987
#14 0x000000000061b6a5 in ATRewriteCatalogs (context=<optimized out>, lockmode=<optimized out>, wqueue=0x7ffe753ba208) at tablecmds.c:4859
#15 ATController (parsetree=parsetree@entry=0x289ad10, rel=<optimized out>, cmds=<optimized out>, recurse=<optimized out>, lockmode=lockmode@entry=8, context=context@entry=0x7ffe753ba4b0) at tablecmds.c:4437
#16 0x000000000061c504 in AlterTable (stmt=stmt@entry=0x289ad10, lockmode=lockmode@entry=8, context=context@entry=0x7ffe753ba4b0) at tablecmds.c:4083
#17 0x00000000007ba6aa in ProcessUtilitySlow (pstate=pstate@entry=0x2ac96b8, pstmt=pstmt@entry=0x289b030, queryString=queryString@entry=0x289a018 "alter table t1 alter column info type varchar(8);", context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, qc=qc@entry=0x7ffe753baac0, dest=0x289b110) at utility.c:1325
#18 0x00000000007b8c61 in standard_ProcessUtility (pstmt=pstmt@entry=0x289b030, queryString=queryString@entry=0x289a018 "alter table t1 alter column info type varchar(8);", readOnlyTree=<optimized out>, context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x289b110, qc=qc@entry=0x7ffe753baac0) at utility.c:1074
#19 0x00007fe1b402dfe9 in pgss_ProcessUtility (pstmt=0x289b030, queryString=0x289a018 "alter table t1 alter column info type varchar(8);", readOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x289b110, qc=0x7ffe753baac0) at pg_stat_statements.c:1143
As for shrinking an indexed column, it behaves just as we already understood: both the table and the index are rewritten, so I won't demonstrate it here.
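For completeness, the shrinking case would simply be the following (a sketch, not run here; per the rules above, expect every child table to be rewritten and every index rebuilt):

```sql
-- Shrinking the indexed column back: the DEBUG log would show
-- 'rewriting table "t1_partN"' followed by
-- 'building index "t1_partN_info_idx"' for each of the five partitions
alter table t1 alter column info type varchar(5);
```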
Regular Tables
Now let's try the same on a non-partitioned table, keeping the structure and row count identical:
postgres=# \d t2
Table "public.t2"
Column | Type | Collation | Nullable | Default
--------+----------------------+-----------+----------+---------
id | integer | | |
info | character varying(5) | | |
num | integer | | |
name | character varying(5) | | |
Indexes:
"t2_info_idx" btree (info)
Modifying a non-indexed column
Widening a non-indexed column causes no rewrite, while shrinking rewrites both the table and the index:
postgres=# select pg_relation_filepath('t2');
pg_relation_filepath
----------------------
base/18381/18865
(1 row)
postgres=# select pg_relation_filepath('t2_info_idx');
pg_relation_filepath
----------------------
base/18381/18868
(1 row)
postgres=# set client_min_messages to debug5;
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
SET
postgres=# alter table t2 alter column name type varchar(6); --- not rewritten
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 147677/1/1
ALTER TABLE
postgres=# alter table t2 alter column name type varchar(5); --- rewritten
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: rewriting table "t2" --- rewrite table
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
...
DEBUG: building index "t2_info_idx" on table "t2" with request for 1 parallel workers --- rebuild index
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147678/1/4
DEBUG: index "t2_info_idx" can safely use deduplication
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
...
DEBUG: drop auto-cascades to type pg_temp_18865
DEBUG: drop auto-cascades to type pg_temp_18865[]
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 147678/1/8
ALTER TABLE
postgres=# select pg_relation_filepath('t2');
pg_relation_filepath
----------------------
base/18381/18944
(1 row)
postgres=# select pg_relation_filepath('t2_info_idx');
pg_relation_filepath
----------------------
base/18381/18947
(1 row)
Modifying an indexed column
For the indexed column the conclusion is consistent here: widening causes no rewrite, while shrinking rewrites both the table and the index:
postgres=# alter table t2 alter column name type varchar(5);
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: rewriting table "t2" --- rewrite table
...
DEBUG: compacted fsync request queue from 32768 entries to 2 entries
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
...
DEBUG: compacted fsync request queue from 32768 entries to 2 entries
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
...
DEBUG: compacted fsync request queue from 32768 entries to 2 entries
DEBUG: building index "t2_info_idx" on table "t2" with request for 1 parallel workers --- rebuild index
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: StartTransaction(1) name: unnamed; blockState: DEFAULT; state: INPROGRESS, xid/subid/cid: 0/1/0
DEBUG: qsort_tuple
...
DEBUG: CommitTransaction(1) name: unnamed; blockState: PARALLEL_INPROGRESS; state: INPROGRESS, xid/subid/cid: 147682/1/4
DEBUG: index "t2_info_idx" can safely use deduplication
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: creating and filling new WAL file
DEBUG: done creating and filling new WAL file
DEBUG: drop auto-cascades to type pg_temp_18865
DEBUG: drop auto-cascades to type pg_temp_18865[]
DEBUG: CommitTransaction(1) name: unnamed; blockState: STARTED; state: INPROGRESS, xid/subid/cid: 147682/1/8
ALTER TABLE
Workaround?
A colleague suggested: just detach the partitions, change the type, then attach them back. But native partitioning requires the child's column types to match the parent's:
postgres=# alter table t1 attach partition t1_part1 for values from (0) to (10000000);
ERROR: child table "t1_part1" has different type for column "info"
Directly altering the child table's type doesn't work either:
postgres=# alter table t1_part2 alter column info type varchar(8);
ERROR: cannot alter inherited column "info"
So either all child tables get changed or none of them do. And if you change them all before attaching back, you might as well have altered the partitioned table directly; the detach/attach route just goes around in a circle.
The only approach I can think of so far is to drop the index first, which avoids the prolonged AccessExclusiveLock, and then recreate the indexes on each child table afterwards with CREATE INDEX CONCURRENTLY.
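On PostgreSQL 11+, the per-partition concurrent indexes can then be stitched back into a partitioned index via `CREATE INDEX ... ON ONLY` plus `ALTER INDEX ... ATTACH PARTITION`. A sketch of the whole sequence (object names follow the t1 example above; only two partitions shown):

```sql
-- 1. Drop the partitioned index before the type change (brief AccessExclusiveLock)
drop index t1_info_idx;
alter table t1 alter column info type varchar(8);

-- 2. Create an invalid shell index on the parent only (does not touch the children)
create index t1_info_idx on only t1 (info);

-- 3. Build each child's index without blocking writes, then attach it
create index concurrently t1_part1_info_idx on t1_part1 (info);
alter index t1_info_idx attach partition t1_part1_info_idx;

create index concurrently t1_part2_info_idx on t1_part2 (info);
alter index t1_info_idx attach partition t1_part2_info_idx;
-- ... repeat for the remaining partitions; once every partition's index
-- is attached, the parent index t1_info_idx becomes valid automatically
```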
Summary
To sum up, the rewrite rules change somewhat for partitioned tables:
- Modifying a non-indexed column follows the earlier rules: widening the column causes no rewrite; shrinking it rewrites every child table and every index of the partitioned table
- Modifying an indexed column differs: when widening, the child tables are not rewritten, but all of the indexes are! Shrinking is unchanged: the entire partitioned table and its indexes are rewritten
This is genuinely puzzling: merely widening a column on a partitioned table still rewrites the indexes, and the more indexes there are, the longer the blocking lasts.
Partitioned tables do indeed still carry quite a few differences and limitations; for example, they do not yet support CREATE INDEX CONCURRENTLY:
postgres=# create index concurrently myidx on t1(id);
ERROR: cannot create index on partitioned table "t1" concurrently
So the earlier rule needs refining: "widening a column does not cause a rewrite" does not extend to partitioned tables. One more addendum:
postgres=# create table t3(id numeric(5,3));
CREATE TABLE
postgres=# select pg_relation_filepath('t3');
pg_relation_filepath
----------------------
base/18381/18956
(1 row)
postgres=# alter table t3 alter column id type numeric(6,3);
ALTER TABLE
postgres=# select pg_relation_filepath('t3');
pg_relation_filepath
----------------------
base/18381/18956
(1 row)
postgres=# alter table t3 alter column id type numeric(7,4);
ALTER TABLE
postgres=# select pg_relation_filepath('t3');
pg_relation_filepath
----------------------
base/18381/18959
(1 row)
postgres=# alter table t3 alter column id type numeric(7,5);
ALTER TABLE
postgres=# select pg_relation_filepath('t3');
pg_relation_filepath
----------------------
base/18381/18962
(1 row)
numeric(x,z) to numeric(y,z) when y>=x, or to numeric without specifier: when the scale (the number of digits after the decimal point) changes, the table must be rewritten regardless of whether the precision changed.