Teradata SQL调优 1.优化过程:依照运行时间,数据量和复杂度来定位瓶颈。查看sql执行计划,判断其合理性。 性能监控 ==》目标选取 ==》性能分析 ==》过程优化 ==》运行跟踪(性能监控) 注意:每个过程中都会产生必须的文档 2.性能分析: • Review PDM --表定义 --PI的选择 --表的记录数与空间占用 • Review SQL --关联的表 --逻辑处理复杂度 --整体逻辑 --多余的处理 • 测试运行 --响应时间 • 查看EXPLAIN --瓶颈定位 3.过程优化: • 业务规则理解 --合理选取数据访问路径 • PDM设计 --调整PDM • SQL写法不优化,忽略了Teradata的机理与特性 --调整SQL • Teradata优化器未得到足够的统计信息 --Collect Statistics 4.Multiple Insert/select --> Multi-Statement Insert/Select * 并行插入空表不记录Transient Journal * 充分利用Teradata向空表Insert较快以及并行操作的特性如: • 现状 INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC1 ; INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC2 ; INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC3 ; 说明:串行执行,多个Transaction • 优化后: INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC1 ;INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC2 ;INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC3 ; 说明:并行执行,单个Transaction 5.Insert/Select with Union/Union all --> Multi-Statement Insert/Select * Union 需要排除重复记录,Union all虽不需要排重,但都需要占用大量的Spool空间,都需要进行重新组织数据 如:现状: INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC1 ; UNION ALL SELECT … FROM SRC2 ; UNION ALL SELECT … FROM SRC3 ; … 调整后: INSERT INTO ${TARGETDB}.DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC1 ;INSERT INTO ${TARGETDB}.T01_DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC2 ;INSERT INTO ${TARGETDB}.T01_DES (Party_Id ,Party_Name ... ) SELECT … FROM SRC3 ; 6.排除重复记录 * 针对单表内的重复记录使用ROW_ NUMBER函数排重 * 排重方式多了一层子查询 * 增加了大量的数据重新分布的时间 现状: …… INSERT INTO ${TARGETDB}.T01_INDIV (Party_Id ,Party_Name ... ) SELECT COALESCE(b1.Party_Id,'-1') , COALESCE(TRIM(b1.Party_name),'') ... FROM ( select party_id party_name, … , ROW_NUMBER() OVER (PARTITION BY Party_Id ORDER BY Party_Name ) as rownum from ${TEMPDB}.T01_INDIV b1 … ) AA where AA.rownum = 1 …… 建议做法: INSERT INTO ${TEMPDB}.T01_INDIV … INSERT INTO ${TEMPDB}.T01_INDIV … …… INSERT INTO ${TARGETDB}.T01_INDIV (Party_Id ,Party_Name ... ) SELECT party_id party_name, … From ${TEMPDB}.T01_INDIV b1 Qualify ROW_NUMBER() OVER (PARTITION BY Party_Id ORDER BY Party_Name ) = 1 • 运用Qualify + ROW_ NUMBER函数 • SQL语句简洁明了 • 避免子查询 优化前explain: …… 4) We do an all-AMPs STAT FUNCTION step from PTEMP.VT_T01_INDIV_cur by way of an all-rows scan with no residual conditions into Spool 5 (Last Use), which is assumed to be redistributed by value to all AMPs. The result rows are put into Spool 3 (all_amps), which is built locally on the AMPs. 5) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of an all-rows scan into Spool 1 (all_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 1 is estimated with no confidence to be 6,781,130 rows. The estimated time for this step is 16.01 seconds. 6) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of an all-rows scan with a condition of ("ROWNUMBER = 1") into Spool 8 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 8 by row hash. The result spool file will not be cached in memory. The size of Spool 8 is estimated with no confidence to be 6,781,130 rows. The estimated time for this step is 1 minute. 7) We do an all-AMPs MERGE into PDATA.T01_INDIV from Spool 8 (Last Use). 
优化后explain: …… 4) We do an all-AMPs STAT FUNCTION step from PTEMP.VT_T01_INDIV_cur by way of an all-rows scan with no residual conditions into Spool 5 (Last Use), which is assumed to be redistributed by value to all AMPs. The result rows are put into Spool 3 (all_amps), which is built locally on the AMPs. 5) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of an all-rows scan with a condition of ("Field_10 = 1") into Spool 1 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 1 by row hash. The result spool file will not be cached in memory. The size of Spool 1 is estimated with no confidence to be 6,781,130 rows. The estimated time for this step is 1 minute. 6) We do an all-AMPs MERGE into PDATA.T01_INDIV from Spool 1 (Last Use). BTEQ中不能用length函数 LENGTH()不是Teradata 的标准函数,但是Teradata SQL Assitant支持它。 今天在bteq中用此函数,报错,搞半天,终于知道原因之所在。 具体操作如下: SELECT CASE WHEN ETL_JOB LIKE 'CHK_%' THEN SUBSTR(TRIM(etl_job),5,LENGTH(TRIM(etl_job))-4) ELSE ETL_JOB END etl_job ,CAST('200811'||'01' AS DATE format 'YYYYMMDD') ,Last_JobStatus ,max(Last_Endtime) ,JobType FROM PV_AUTO.ETL_JOB WHERE etl_job in (……) group by 1,2,3,5 ; 报错信息如下: THEN SUBSTR(TRIM(etl_job),5,LENGTH(TRIM(etl_job))-4) $ *** Failure 3706 Syntax error: expected something between '(' and the 'TRIM' keyword. Statement# 1, Info =450 思维惯性,以为sql中存在不可见字符,遂重写报错部分,不见效。突然想起,TERADATA的标准求字符串长度的函数为CHAR,CHARS,CHARACTER等,将 THEN SUBSTR(TRIM(etl_job),5,LENGTH(TRIM(etl_job))-4)改成THEN SUBSTR(TRIM(etl_job),5,CHARS(TRIM(etl_job))-4),问题解决。 Teradata数据压缩 1 数据压缩的好处 1. Teradata数据压缩可以节省存储空间,从而让相同的存储单元存储更多的业务数据; 2. 由于查询时需要检索的数据量相对要少,可以减少I/O,并且缓存中可以存储更多的压缩过的逻辑行,从而改善数据库性能; 3. Teradata使用的压缩算法非常有效,因为压缩原因,获取数据需要更少的的磁盘访问,算法将节省的CPU时间用来来执行压缩操作; 4. 可以将压缩节省出来的空间创建高级索引; 2.teradata数据压缩 Teradata版本v2r5采用无损压缩算法,允许在一列上对多达255个数据值进行压缩,数据压缩不会造成信息的丢失。 一般数据压缩的粒度可以分为数据行和数据块。Teradata数据压缩的粒度是数据行,这是日常数据操作的粒度,数据行压缩可以独立的针对列进行,数据块压缩不行,并且数据块压缩都会造成额外的压缩/解压开销,会降低数据库性能。Teradata可以直接访问压缩后的数据行—访问数据行时它不需要对数据做重构或者解压操作。 前面说过,Teradata可以独立的对行中的某列做压缩操作。如果某列允许有空值,空值也允许被压缩。最好的压缩候选项为此列中出现频率最高的值,压缩后的值存放在表头。每行前面有1 bit数据来标识此行有没有被压缩。非主索引的定长列都可以是Teradata压缩的候选项。如下的数据类型可以被压缩,括号中为该数据类型的长度(限制): - Integer Date (4) - CHAR (N, where N < 256) - BYTEINT (1) - SMALLINT (2) - INTEGER (4) - FLOAT/REAL (8) - DOUBLE (8) - DECIMAL (1, 2, 4 or 8) - BYTE (N, where N < 256) 如果某列中有高频率出现的值,那么该列可以有很高的压缩比,常见的有下面一些情形: - NULLs - Zeros - Default values - Flags - Spaces - Binary indicators (e.g., T/F) 在经分系统中,一些产品字段,如Brand_Id,prd_id等,都会有很高的压缩比。 Teradata压缩对应用程序,ETL,查询等操作是完全透明的,并且其操作相当方便,用户只需要在表定义的时候加上压缩信息就可以了。例如,下面是数据压缩的语法: CREATE TABLE Properties ( Address VARCHAR(40), City CHAR(20) COMPRESS (‘WuHan’), StateCode CHAR(2) ); 当记录中出现“WuHan”的频率很高时,数据压缩将非常有效。 3. **系统数据压缩操作方法 基于以上数据压缩原理,我们开发了脚本create_comp_ddl.pl,用来依照压缩规则生成包含压缩信息的建表DDL. 创建数据库表脚本文件用户配置文件名称 输出脚本目标文件 压缩值数目 起始压缩表大小 压缩值占比(百分数); 对参数的解释如下: 创建数据库表脚本文件,老表ddl名称(必须将ddl导出放在指定目录); 用户配置文件名称,配置信息主要是除varchar,pi,ppi以外的不希望被压缩的列,各列逗号分开 将要生成的新表ddl文件名称; 30,压缩值数目,表示取多少个压缩来做压缩操作; 100000000,压缩表大小阀值,当表大小小于此阈值时,不进行压缩; 20,当前所取压缩记录占表空间比例的阀值,当欲压缩记录占表空间比例小于此阈值时不做压缩处理; 操作步骤: 1)、获取压缩前的建表ddl,保存为src_table.ddl,文件名将作为参数; 2)、运行create_comp_ddl.pl脚本, 如:perl create_comp_ddl.plc sr_table.ddl config.txt target_table.sql 30 100000000 30 运行结果将生成新的建表ddl(即target_table.sql); 3)、以新的ddl建表,将老表的数据导入新表,确认无误后,删除老表; 4. 
批量压缩 若要压缩的表较多,也可以采取批量压缩的方式。这里采用批处理执行perl脚本的方法来批量压缩。如需要相继对prd_prd_inst_hist表和pty_cust_hist表做压缩,创建批处理compression.bat,其内容如下: perl create_comp_ddl.plc src_table_prd.ddl config.txt tar_table_prd.sql 30 100000000 30>1.log perl create_comp_ddl.pl src_table_pty.ddl config.txt tar_table_pty.sql 30 100000000 30>2.log 注意,上述参数里面的源表ddl名和目标ddl名不能相同,否则第二次的结果可能覆盖第一次结果. 常用sql使用技巧 LIKE用法 select first_name,last_name from employee where last_name like all('%E%','%S%'); select first_name,last_name from employee where last_name like any/some('%E%','%S%'); Teradata缺省不区分大小写,如果要区分,可以使用其扩展参数CASESPECIFIC select first_name,last_name from employee where last_name(CASESPECIFIC) like '%Ra%'; 通配符作为一般字符使用 例: LIKE "%A%%AAA__" ESCAPE"A" 在这个表达式中,讲字母A定义为ESCAPE字符,其中: 第一个%为通配符: 第一个A和其后的%联合表示字符% 第三个%为通配符 第二个A和其后的A联合表示字符A; 第四个和其后的'_'联合表示字符_ 最后一个'_'为通配符 NULL的使用 当进行升序排列时,NULL在数字列排列在负数前,在字符排列在空格前 优先级和括号 逻辑运算符的缺省优先级顺序从高到低为:NOT.AND.OR NOT NOT既可以否定操作符,也可以否定条件表达式 否定操作符: select first_name,last_name,employee_number from employee where department_number not =301; 否定条件: SELECT first_name,last_name,employee_number FROM employee WHERE NOT(department_number=301); 字符型数据 在ANSI标准中关于字符型数据定义了两类:CHAE和VARCHAR,Teradata除了上述两类基本字符数据外,还扩展了LONG VARCHAR类型,它等同于VARCHAR(64000),是最长的字符串 char(size)固定长度的字符串最大长度:64000字节 varchar(size) char varying(size) character varying(size) 这三个是可变长度字符串,最大长度:64000字节 long varchar 等同于varchar(64000) 二进制数据 二进制数据类型是Teradata的扩展,ANSI标准没有此类型 BYTE(size) 固定长度的二进制串默认值:(1)最大值:64000字节 varbyte(size) 可变长的二进制串默认值:(1)最大值:64000字节 数字型数据 在ANSI标准中关于数字型数据定义了四类:SMALLINT.INTEGER.FLOAT.DECIMAL,Teradata还扩展了BYTEINT和DATE smallint 整数 范围:-32768~32767 integer 整数 范围:-2147483648~2147483647 decimal(size,dec) 小数 最大:18位 numeric(precision,dec) decimal的同义词 float 表示浮点数 float[(precision)] 通float real 同float double precision 双精度浮点数 byteint 有符号整数 范围-128~127 date 特殊整数,格式为YYMMDD或yyymmdd表示日期 图形数据 graphic[(n)] 固定长度的图形字符串默认长度:1 vargraphic(n) 可变长的图形字符串 long vargraphic 可变长的图形字符串 算术运算符 **(求幂) mod(取模) 系统变量 date当前系统日期 time系统时间 user当前登陆的用户 database当前缺省的数据库 字符常量.数字常量和计算模式 字符文字在ANSI方式下区分大小写,而在Teradata缺省模式下不区分大小写 数字型常量最多可以包含15个数字,数字前面的零是无意义的 计算模式是指在SQL的SELECT语句中直接进行数学计算,如下: select 2*2593; 日期计算 例:1997年3月31日的表达方式 year=(1997-1900)*10000=970000 month=(3*100)=300 day=31 date=970331 于日期有关的数据函数 1.EXTRACT ANSI标准中EXTRACT函数允许选取日期和时间中任意段或任意间隔的值,Teradata中EXTRACT函数支持日期数据中选取年.月.日.从时间数据中选取小时.分钟和秒 SELECT DATE; 96/11/07 SELECT EXTRACT(YEAR FROM DATE); 1996 SELECT EXTRACT(MONTH FROM DATE+30); 12 SLEECT EXTRACT(DAY FROM DATE+2); 09 SELECT TIME; 14:52:32 SELECT EXTRACT(HOUR FROM TIME); 14 SELECT EXTRACT(SECOND FROM TIME+30); INVALID TIME 2.ADD_MONTHS 利用CAST作数据转换 ANSI标准中利用CAST函数将一种数据类型转换成另一种数据类型 select cast(salary_amount as integer) from employee; select cast(salary_amount as dec(6,0)) from employee; select cast(last_name as char(5)) from employee where department_number=401; Teradata也可以利用CAST函数来完成上面的操作,另外,它也作了扩充.为了完成上面相同的操作,也可以使用下面的表达方式 select salary_amount(integer); select salary_amount(dec(6,0)); select last_name(char(5)); Teradata对CAST函数本身也作了扩展,比如为了将显示结果以大写表示,可以使用下面的SQL语句 select cast(last_name as char(5) uppercase) from employee where department_number=401; 简单的宏 宏(Macro)的基本特征是: 可以包含一条或多条SQL语句 可以包含多个BETQ语句 可以包含注解 存储在数据字典中 宏的定义 create macro birthday_list as (select last_name,first_name,birthdate from employee where department_number=201 order by birthdate; ); 宏的执行 exec birthday_list; 宏的删除 drop macro birthday_list; 宏的显示和改变 show macro birthday_list 使用replace macro命令可以改变宏的定义 子查询 基本子查询 复杂子查询 在子查询中可以使用一些限制符,如下所示: =any 等于 in not=all 等于 not in =some 等于 in EXISTS在子查询中的使用 
EXISTS可以使用在自查询中,用来表示查询至少返回一行.如果前面加上否定词not,则表示查询时无记录存在.exists可以代替in,而not exists可以代替not in select 'YES' where exists(select department_number from department where department_number not in(select department_number from employee)); select true where exists(select * from employee where department_number=600); 关于子查询的一些基本规则 子查询必须用括号括起来 子查询可以是in或not in字句的操作目标 也可以是exists或not exists字句的操作目标 支持限定词all.any.some 支持like或not like 子查询中可以指定匹配多个字段 子查询结果均为唯一值,即自动去除重复记录,相当于自动加上distinct关键词 order by不能用于子查询内 子查询最多可以指定64个表或视图 Fastload 1. 简介: Teradata 作为数据仓库行业的老大,其对数据并行处理能力令人钦佩,而Fastload 工具填充数据的速度绝对可以让任何人惊讶。本文就Fastload 工具的使用作一介绍,希望能帮助读者快速掌握这个工具的使用。 Fastload 支持批处理的脚本编写方式,也支持交互式的方式。其功能就是从数据文件中把大批数据快速插入Teradata 数据库。在数据仓库建设阶段对ETL是非常重要的手段,平均比其他第三方ETL 工具快3 倍以上。 另外Fastload也提供了错误-恢复执行功能,能够继续先前由于各种原因停下来没有完成的工作。 Fastload 能够从主机、专线、或者一般的TCP/IP 连接的计算机节点上运行。 使用限制:必须是空表,并且该表上没有外健,也没有除了UPI 或者NUPI 以外的索引。还有要注意的一点, 就是即使要上载的表是MULITISET(允许重复纪录)的,FASTLOAD 也不会将重复的纪录装入。 2. Fastload 运行过程介绍 共有2 个步骤,数据装载和数据排序过程 装载过程:接收从数据源文件传来的大量数据,并且按照HASH 算法把数据进行分布,分布到对应的AMP 里面去。 数据记录被写入没有排序的数据块中。 数据排序:把装载步骤生成的大量数据块进行排序操作,并且把数据块写入磁盘。 3. 支持的文件格式 在使用FASTLOAD 时候,数据源文件有很严格的规则。 FASTLOAD 支持5 种文件格式 a) DATA 文件:使用FASTEXPORT 或者BTEQ 生成的数据文件。(用.export data file=… 命令生成)。 b) INDICDATA 文件: 和第一种的区别在于文件包含了NULL 的信息(用.export indicdata file=…命令生成)。 c) VARTEXT 文件:变长的纪录字段,每个字段之间用某个特定的字符分隔。 d) 无格式文件:需要指定某个字段的起始位置和长度,还要指定换行符的长度 (需要注意的是UNIX 里面用一个字节长度表示换行,WINDOWS 里面是2 个) 例如文件格式为: +470000000542+0010+470000000659+Jan 01 2003+ +470000000543+0011+470000000660+Jan 11 2003+ +470000000544+0012+470000000661+Jan 23 2003+ 就必须这样定义字段(WINDOWS 环境) define SERV_ID (char(12)), deliml(char(1)), CUST_ID (char(4)), delim2(char(1)), CONTRACT (char(12)), delim3(char(1)), C_DATE (char(11)), delim4(char(1)), newlinechar(char(2)) e)二进制文件:用得比较少。 4. Fastload 的.Begin Loading 命令介绍 .begin loading dbname.tablename errorfiles dname.e1,dname.e2,checkpoint 1000 indicators; dbname 表示要装入的数据库名称 tablename 表示要装载的表的名称(必须已经存在) e1 和e2 可以随意指定(不能和数据库中已经存在的对象重名)用来记录当FASTLOAD 不成功时候的错误信息, 对应错误的数据包等信息,并且能够利用这2 个表进行一旦出现上传友问题时候,不用全部重新开始上传的恢复机制。 Checkpoint 1000 表示每1000 条记录检查一次,在第一步骤暂停情况下,可以不用重新从第一条记录开始,而从最 后一次检查点处继续装载过程。 Indicators 表示当数据文件里面有表示空子段的标示时,可以保留空子段。前提是要上传的文件时使用fastexport 或者 bteq 的 .import indicdata file=…;方式生成的数据文件。 5. Fastload 的“错误恢复执行”功能 a)如果数据在装载过程出错(数据库空间不够或者用户中止程序的执行等),在排除错误后可以重新执行FASTLOAD 脚本, 如果指定了CHECKPOINT 参数,数据将在通过CHECKPOINT 点处开始继续装载。如果没有指定CHECKPOINT,系统将不知道从 哪里开始是准确的,从第一条开始重新装载。 b)多次装载,这个功能非常有用,比如要装载的表的内容在2 个文件中,就应该采用该功能。如果在一个脚本中执行了 BEGIN LOADING 但是没有ENDLOADING 语句,并且没有任何错误。装载将停下来,等待用户选择装载下个一个文件或者执 行END LOADING 语句。 c)如果程序在第二阶段出错,只要简单的执行BEGIN LOAGING 和END LOADING命令即可,这样将重新第二阶段的运行。 6. Fastload 交互方式常用命令 HELP;可以看到FASTLOAD 的全部语法 SHOW;可以查看由DEFINE 命令定义的字段格式 HELP TABLE TNAME;比较有用,可以不用对字段进行逐一定义,省略DEFINE语句。 CLEAR;清除上面所有DEFINE 的定义 另外DEFINE 的内容可以叠加,可以定义若干字段之后,HELP TABLE 这时候SHOW之后就会发现定义的内容是 DEFINE命令和HELP TABLE命令的全部内容。 7. Sessions 命令说明 sessions 指定了同时连接TERADATA 服务器的进程数目, sessions 命令的语法是 sessions max min; 限制条件是最多可以指定与AMP 个数相同个session,如果大于AMP 个数,将按照AMP 个数进行处理。 Sessions *;表示将用系统的AMP 个数进行操作。 题外话:session 命令在不同的工具中有不同的写法,多个session 不一定好,对fastload 来说,session 多一些很有好处,但是对于需要进行全表扫描的操作多个session 仍然会造成session 的顺序执行,同时浪费资 源,降低系统的性能。例如bteq 里面如果需要在没有索引的字段上进行对某条记录的查询,就不要用多个session. 8. 
Fastload 脚本例子和说明(其中数据源文件的格式是变长纪录的文本文件) sessions 4; errlimit 25; /*登陆服务器,要修改hosts 文件格式为增加一行 teracop1 10.10.10.10 */ logon tera/qin,qin; /*创建要插入的空表*/ create table qin.mb_nbr ( SERV_ID varchar(12), CUST_ID smallint, CONTRACT varchar(12), C_DATE date )unique primary index(SERV_ID); /*设置源文件的格式为变长的字符串,并且每个字段用’:’分隔*/ set record vartext ":" ; /*定义要传入的表的字段格式和数据文件名称*/ define SERV_ID (varchar(12)), CUST_ID (smallint), CONTRACT (varchar(12)), C_DATE (date ,NULLIF=0) ) file=e:MB_SERV_ACC_NBR; /*显示上面的定义*/ show; /*开始装载数据*/ begin loading mb_nbr errorfiles qin.e1, qin.e2 checkpoint 1000; insert into qin.mb_nbr( :SERV_ID, :CUST_ID, :CONTRACT); /*结束装载并退出*/ end loading; logoff; 9. 执行的片断,从中可以清楚地看到FASTLOAD 的2 个阶段。 C:_fastload>fastload fl_out.txt ( <表示输入脚本, >表示 输入屏幕内容) **** 16:57:45 Number of recs/msg: 232 **** 16:57:45 Starting to send to RDBMS with record 1 **** 16:57:51 Starting row 100000 **** 16:57:59 Starting row 200000 **** 16:58:07 Starting row 300000 **** 16:58:14 Starting row 400000 **** 16:58:22 Starting row 500000 **** 16:58:32 Starting row 600000 **** 16:58:43 Starting row 700000 **** 16:58:48 Sending row 747882 **** 16:58:48 Finished sending rows to the RDBMS ============================================================== = End Loading Phase = ============================================================== 0010 end loading; **** 17:00:55 END LOADING COMPLETE Total Records Read = 747882 Total Error Table 1 = 0 ---- Table has been dropped Total Error Table 2 = 0 ---- Table has been dropped Total Inserts Applied = 747882 Total Duplicate Rows = 0 Start: Thu Sep 04 16:58:50 2003 End : Thu Sep 04 17:00:55 2003 0011 logoff; ============================================================== = Logoff/Disconnect ============================================================== **** 17:01:05 Logging off all sessions **** 17:01:05 Total processor time used = '11.1761 Seconds' . Start : Thu Sep 04 16:57:40 2003 . End : Thu Sep 04 17:01:05 2003 . Highest return code encountered = '0'. 
**** 17:01:05 FDL4818 FastLoad Terminated fastload(2) FASTLOAD的特性: 1)FASTLOAD脚本包含三个主要模块: 建表模块、Define模块、Insert模块。 2)FASTLOAD分为两个阶段: 第1阶段 - 从数据文本中读取记录,只需放到目标AMP上即可 第2阶段 - 在AMP内部按Rowhash进行排序 3)FASTLOAD的目标表初始的时候只能是个空表。 4)FASTLOAD的目标表不能定义次索引,但可以定义分区字段。 5)FASTLOAD一次只能对一张表进行加载,不支持对多张表进行加载。 6)FASTLOAD支持多个数据文件一次性加入一张表中,但这些数据文件是串行加载并非并行加载。 7)FASTLOAD在加载的时候会自动剔除文本中的重复记录,即使目标表定义为multiset table。 典型的fastload脚本示例如下: /* 限制错误记录条数*/ ERRLIMIT 1; /* 数据库登录语句*/ LOGON ETL_TD_STAGE,ETL_TD_STAGE; /* 建表模块*/ DATABASE TD_STAGE; DROP TABLE AUCTION_PROPERTY; DROP TABLE PLOG.AUCTION_PROPERTY_E1; DROP TABLE PLOG.AUCTION_PROPERTY_E2; CREATE MULTISET TABLE AUCTION_PROPERTY ,NO FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT ( AUCTION_ID CHAR(32) CHARACTER SET LATIN CASESPECIFIC, PROPERTY_ID DECIMAL(11,0), PROP_VID INTEGER, STATUS INTEGER, GMT_CREATE TIMESTAMP(0), GMT_MODIFIED TIMESTAMP(0), INS_DATE TIMESTAMP(0)) PRIMARY INDEX ( AUCTION_ID ); /* 方式一:设置加载方式为变长,分割符为Tab键*/ /* 同时,进入加载第1阶段*/ .SET RECORD VARTEXT " " NOSTOP BEGIN LOADING AUCTION_PROPERTY ERRORFILES PLOG.AUCTION_PROPERTY_E1,PLOG.AUCTION_PROPERTY_E2; /* DEFINE模块,定义数据文本格式和存放路径*/ DEFINE AUCTION_ID (VARCHAR(42),nullif='') ,PROPERTY_ID (VARCHAR(21),nullif='') ,PROP_VID (VARCHAR(20),nullif='') ,STATUS (VARCHAR(20),nullif='') ,GMT_CREATE (VARCHAR(25),nullif='') ,GMT_MODIFIED (VARCHAR(25),nullif='') ,INS_DATE (VARCHAR(25),nullif='') FILE=/bak/Teradata/auction_property.txt; /* 方式二:设置加载方式为定长,分割符为Tab键*/ DEFINE AUCTION_ID (VARCHAR(42),nullif='') ,TAB1 (CHAR(1)) ,PROPERTY_ID (VARCHAR(21),nullif='') ,TAB2 (CHAR(1)) ,PROP_VID (VARCHAR(20),nullif='') ,TAB3 (CHAR(1)) ,STATUS (VARCHAR(20),nullif='') ,TAB4 (CHAR(1)) ,GMT_CREATE (VARCHAR(25),nullif='') ,TAB5 (CHAR(1)) ,GMT_MODIFIED (VARCHAR(25),nullif='') ,TAB6 (CHAR(1)) ,INS_DATE (VARCHAR(25),nullif='') ,NEWLINE (CHAR(1)) FILE=/bak/Teradata/auction_property.txt; /* 设置加载起点,也可以设置加载终点(THRU)*/ RECORD 1; /* INSERT模块,定义物理表对应的格式*/ INSERT INTO TD_STAGE.AUCTION_PROPERTY( AUCTION_ID ,PROPERTY_ID ,PROP_VID ,STATUS ,GMT_CREATE ,GMT_MODIFIED ,INS_DATE ) VALUES ( :AUCTION_ID ,:PROPERTY_ID ,:PROP_VID ,:STATUS ,:GMT_CREATE (format 'yyyymmddhhmiss') ,:GMT_MODIFIED (format 'yyyymmddhhmiss') ,:INS_DATE (format 'yyyymmddhhmiss') ); /* 第1阶段结束,进入加载第2阶段*/ END LOADING; /* 数据库退出语句*/ LOGOFF; 典型的fastload日志示例如下: /* 第1阶段,文本记录无序录入到AMP上*/ **** 19:44:59 Number of recs/msg: 3 **** 19:44:59 Starting to send to RDBMS with record 1 **** 19:45:08 Starting row 100000 **** 19:45:18 Starting row 200000 **** 19:45:28 Starting row 300000 **** 19:45:38 Starting row 400000 **** 19:45:48 Starting row 500000 **** 19:45:58 Starting row 600000 **** 19:46:08 Starting row 700000 **** 19:46:18 Starting row 800000 **** 19:46:28 Starting row 900000 **** 19:46:38 Starting row 1000000 **** 19:46:39 Sending row 1015760 **** 19:46:39 Finished sending rows to the RDBMS /* 第2阶段,AMP内部进行排序*/ =================================================================== = = = End Loading Phase = = = =================================================================== 0013 END LOADING; **** 19:47:58 END LOADING COMPLETE Total Records Read = 1015760 - skipped by RECORD command = 0 - sent to the RDBMS = 1015760 Total Error Table 1 = 0 ---- Table has been dropped Total Error Table 2 = 0 ---- Table has been dropped Total Inserts Applied = 1015760 Total Duplicate Rows = 0 Start: Thu Mar 01 19:46:40 2007 End : Thu Mar 01 19:47:58 2007 2.Using fastload batch mode运行参数: 3.两个错误日志表区别: errortname1 • Constraint violations • Conversion errors • Unavailable AMP conditions These types of errors always 
occur during the loading phase of your FastLoad job—after executing the BEGIN LOADING command, but before the END LOADING command. errortname2 Unique primary index violations This type of error always occurs during the end-loading phase of your FastLoad job—after executing the END LOADING command. ARC 1.Archive and Recovery Utility (ARC) Teradata ARC可做如下事项: • 备份数据库,表,或者分区表选定的分区. • 恢复数据库,表,或者分区表指定的分区. • 拷贝已备份数据库,表,或者分区表选定的分区到不同硬件平台的Teratdata数据库上. • 在永久日志表上设置checkpoint入口. • 利用回滚或前滚技术将数据库恢复到任意的checkpoint(using change images from a journal table). • Delete change image rows from a journal table. Archive – captures user data on portable storage media. Restore – restores data from portable storage media. Recovery – recovers changes to data from permanent journal tables. ARC provides additional data protection for these situations: • Loss of an AMP’s Vdisk for no fallback tables • Loss of multiple Vdisks (AMPs) in the same cluster • Failed batch processes • Accidentally dropped tables, views or macros • Miscellaneous user errors • Disaster recovery Common uses for ARC: • Archive a database, individual table, or selected partitions of a PPI table (V2R6). • Restore a database, individual table, or selected partitions of a PPI table (V2R6). • Copy an archived database, table, or selected partitions of a PPI table (V2R6) to a Teradata Database on a different system. 2.Archive and Recovery Phases • Phase 1 — 数据字典阶段(Dictionary Phase) 1. Allocate an event number (from DBC.Next). 2. Issue a BEGIN TRANSACTION statement. 3. Resolve object name. 4. Check access rights. 5. Place locks: - Utility locks on data dictionary rows. - Utility locks on data rows. Note: READ locks on ARCHIVE; EXCLUSIVE locks on RESTORE. 6. Delete existing tables prior to RESTORE. 7. Issue an END TRANSACTION statement. • Phase 2 — 数据阶段(Data Phase) 1. Issue a BEGIN TRANSACTION statement. 2. Insert rows into RCEVENT and RCCONFIGURATION. 3. Perform the operation. 4. Update RCEVENT. 5. Release locks (if user specified). 6. Issue an END TRANSACTION statement. 3.Restore versus FastLoad • fastload步骤: - FastLoad uses a single session to send the INSERT statement to the PE and AMP vprocs. - Multiple sessions are then used to facilitate sending rows to the AMP vprocs. - Upon receipt, each AMP vproc hashes each record and redistributes it over the BYNET. This is done in parallel. - The receiving AMP vproc then writes these rows directly to the target table as unsorted blocks. - When loading completes, each AMP vproc sorts the target table, puts the rows into blocks, and writes the blocks to disk. - Then, fallback rows are generated if required. FastLoad operates only on tables with no secondary indexes. - You have to create any required indexes when the FastLoad is complete. • Restore步骤: 恢复到相同配置的操作: - Recovery of data blocks to the AMP vproc. - The blocks are already in the appropriate format. 恢复到不同配置的操作: - The block is first sent to the AMP vproc in the old configuration. - Then, it strips off its own rows and forwards (redistributes) the remainder of the block to the AMP vproc for the new configuration. Since the original rows were sorted in data blocks by RowID, the result is usually much faster than a normal redistribution. 上图可以说明一个事实,fastload的restore比fastload本身更快,原因如下: • FastLoad has to hash each row, redistribute every row, collect and write to disk, then read, sort, and write back to disk. • RESTORE copies blocks to the appropriate AMPs. • FastLoad operates on a table by table basis (one at a time). 
• RESTORE can restore all of the tables for one or more databases with a single job. 4.ARC There are several ways to invoke the Archive facility. • NetVault (from BakBone software) • NetBackup (from VERITAS software) – limited support • ASF2 (previous X Windows tape management utility – not supported with V2R5/V2R6) – ASF2 Tape Reader (ASF2TR) can be used to read previous ASF2 tapes • Command Line (execute arcmain) • Host or Mainframe arcmain脚本实例: --语法 LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (Database_1), ABORT, RELEASE LOCK, FILE = db1_data; LOGOFF; --dbc备份 .logon dbc,dbcpassword; Archive data tables (dbc),file = DBCDATA,release lock; .logoff; --数据字典备份 .logon dbc,dbcpassword; Archive dictionary tables (dbc) all,file = DBCDIC,release lock; .logoff; 5.Restart Log 备份/恢复的第一个动作是将脚本写入“restart log file”.此日志文件包括: • Current statement • Object list • Checkpoint positioning information • Checkpoint configuration information 如果此作业由于某种原因被中断,ARC工具会利用这个日志来重启archive,restore,或recovery等正在执行的动作. The RESTARTLOG (or RLOG) = filename runtime parameter can be used to specify a restart log name in UNIX MP-RAS and Windows systems. 6.Session Control 登录命令: 1. 导致两个session登入: 一个是SQL语句,一个是控制请求. -- 当遇见ARCHIVE或RESTORE命令时, ARC在运行时参数SESSIONS=nnn中吊起一个额外的session请求. 2. 识别用户及账户以分配资源. 3. 为Teradata数据库系统识别用户,以便相关软件核实从属关系并判定访问权限. CHECKPOINT Permits you to execute both the SQL and ARC utility checkpoint statements. DUMP Permits you to execute the ARC Archive statement RESTORE 允许执行下列ARC语句: Restore | Delete Journal | Rollforward | Release Lock* | Rollback Build 登出命令: 1. Ends all Teradata sessions logged on by the task, and 2. Terminates the utility. 要释放其他用户所持有的锁,你必须指定忽略(specify Override)并且持有删除潜在对象的权限. 7.Multiple Sessions 合适的session数目取决于以下因素: • number of AMPs • number of channel or LAN connections • speed and type of tape subsystem 对于小型系统,每个AMP一个session可以确保来自于所有AMP的数据块均匀分布. • Teradata将每个session分配到单个vproc上,session保持在此vproc上直到所有需要的数据存档,此后按照需要看要不要移动到其他vproc上. • Archive attempts to build blocks from each vproc in turn. The blocks are composed of complete database blocks. • Data blocks from different vprocs are never mixed within the same archive block. 8.ARC Statements LOGON Begins a session. LOGOFF Ends a session. ARCHIVE Archives a copy of a database or table to a host-resident data set/file. ANALYZE Reads an archive tape to display information about its content. RESTORE Restores a database or table from a archive file to specified AMPs. COPY Restores a copy of an archived file to a specified Teradata database system. BUILD Builds indexes and fallback data. RELEASE LOCK Releases host utility locks on databases or tables. DELETE DATABASE Deletes a database. CHECKPOINT Marks a journal for later archive or recovery activities. ROLLBACK Recovers a database and tables to a state that existed before some change. ROLLFORWARD Recovers a database or table to a state that existed after some change. DELETE JOURNAL Deletes SAVED or RESTORED Journal rows. REVALIDATE REFERENCES Revalidate referential integrity; a housekeeping or cleanup function. ARC options: • ARCDFLT – 这是指明包含system-wide默认参数值的环境变量: Example: SET ARCDFLT=C:TESTARCCONFIG.ARC CONFIG.ARC文件将包含有效的运行时参数变量,例如: SESSIONS=8 RESTARTLOG=C:TEMParcrlog1 • ARCENV – 这是定义任何有效的Teradata ARC运行是参数的环境变量. Example: SET ARCENV=RESTARTLOG=C:TEMParcrlog2 • ARCENVX – same as ARCENV, except that ARCENVX has the highest override priority. 
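The statement list above includes RESTORE, BUILD and RELEASE LOCK, yet all of the sample scripts in this article are ARCHIVE jobs. Below is a minimal restore sketch, assuming the archive file db1_data produced by the earlier arcmain example; the script file name (e.g. restore_db1.arc) is arbitrary and it is run the same way: arcmain < restore_db1.arc.

LOGON dbc/sysdba,dbapass;
RESTORE DATA TABLES (Database_1)
, RELEASE LOCK
, FILE = db1_data;
LOGOFF;

For AMP-level or cluster restores of no fallback tables, a BUILD statement would normally follow the RESTORE to rebuild indexes and fallback rows, as noted in the statement list.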
9.ARCHIVE Statement 10.ARCHIVE Examples 例一: LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (PD) , ABORT , RELEASE LOCK , FILE = arc1_PD; LOGOFF; 例二: LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (PD) (EXCLUDE TABLES (dept_summary, phone_summary)) , ABORT , RELEASE LOCK , FILE = arc2_PD; LOGOFF; 例三: LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (Sysdba) ALL , ABORT , RELEASE LOCK , FILE = arc3_Sys; LOGOFF; 例四: LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (Sysdba) ALL (EXCLUDE TABLES (PD.dept_summary, PD.phone_summary)) , EXCLUDE (Demo), (Guest_Users) ALL, (Sandbox) , ABORT , RELEASE LOCK , FILE = arc4_Sys; LOGOFF; 例五: LOGON dbc/dbc,dbcpass; ARCHIVE DATA TABLES (DBC) ALL , ABORT, RELEASE LOCK , FILE = arc5_DBC; LOGOFF; 运行方式: arcmain < archive2_pd.arc 11.Archiving Selected Partitions of PPI Table V2R6.0起,可以对一个或多个分区执行全AMP备份: • 对所选分区备份只局限于全AMP备份; • Dictionary, cluster, and journal archives are not supported. 考虑因素: • Archiving selected partitions operates on complete partitions within tables. – Defining a partial partition means that the entire partition will be archived. – A restore operation always deletes the selected partitions of the target table before restoring the rows that are stored in the archive. • An archive or restore of selected partitions only places full-table locks. Locks on individual partitions are not supported. • Re-collect table statistics after a restore of selected partitions because statistics are part of the table dictionary rows, which are not restored during a partition-level restore. 分区备份实例: LOGON dbc/sysdba,dbapass; ARCHIVE DATA TABLES (TFACT.Sales_PPI) (PARTITIONS WHERE (!Sales_Date BETWEEN '2006-01-01' AND '2006-03-31'!)) , ABORT, RELEASE LOCK, FILE = arc6_PPI; LOGOFF; 12.ANALYZE Statement 作用:分析备份到磁带中的数据,可帮助用户恢复备份的部分数据而不是全部,运行此命令不需要登录数据库 语法: ANALYZE [ * | ALL | [ (Databasename) | (Dbname1) TO (Dbname2) ] [, ...] [ , DISPLAY [ LONG] | , VALIDATE ] , FILE = name ; 注意: • The ANALYZE statement instructs the ARC utility to read an archive file and display information about its content. • The LONG option displays all table, view, macro, trigger, and stored procedure names. – If an archive file contains a selected partition archive of a table (V2R6), the bounding condition used to select the archived partitions is displayed. • The VALIDATE option reads each record to check that each block on the archive file is readable. • ANALYZE doesn’t require a LOGON or LOGOFF statement. Example: analyze1_pd.arc (script name) ANALYZE (PD), DISPLAY LONG, FILE = arc1_PD; To execute: arcmain < analyze1_pd.arc 13.Types of Archives • ARCHIVE DATA TABLES – Fallback Tables, Views, Macros, Triggers, and Stored Procedures – No fallback tables – All AMP or cluster archive • ARCHIVE DICTIONARY TABLES – DD/D rows to complement cluster-level archive • ARCHIVE NO FALLBACK TABLES – Non-fallback tables – Archives AMP data missed during previous all AMP or cluster-level archive • ARCHIVE JOURNAL TABLES – Journal Tables ALL AMP Database ARCHIVE includes: • Data rows from the tables in the specific database(s). • Table structure information. • All table, column, and index definitions. • All views, macros, and triggers definitions. • Stored procedures. • Permanent journal information is not included. ALL AMP Table ARCHIVE includes: • Data rows from the table. • All dictionary information for the table. • All table, column, and index definitions. Specific AMPs or Cluster ARCHIVE includes: • Data rows from the table or tables within the specific database(s). • No dictionary rows. 
Dictionary ARCHIVE includes: • Dictionary rows for the object being archived.(Tables: TVM, TVFields, Indexes, IndexNames.) • Permanent journal information is not included. 说明: 因为分区备份不备数据字典信息,所以需要单独维护数据字典以恢复数据. 14.Archive Levels 除非用户指定特定的处理器或cluster备份,系统将执行全AMP级别的备份. – 分区备份可以将一个或多个分区的数据备份到一个数据文件中; – 分区备份作业可以并行运行也可以分开运行; – 单AMP的恢复会更快;数据字典需要单独备份; 单AMP备份只能在此AMP已经恢复服务时完成非fallback表的备份操作; 15.Archive Options • Release Lock – Utility locks automatically released upon successful operation completion • Indexes – Restricted to all-AMP dumps – Includes secondary indexes with archive – Requires more time and media • Abort – Fails ALL AMP or cluster dumps AND provides error messages if: > AMP vproc is off-line AND, > Dumped objects include no fallback tables, OR > Dumped objects include single-image journals • Non empty Database(s) – Excludes users/databases without tables, views, macros, triggers, or stored procedures from archive operation • Use Group Read Lock – Permits concurrent table archiving and transaction updates on locked rows – Requires after-image journaling of table 16.Indexes Option • ARCHIVE Operation – If all AMPs are online, then all indexes are archived. – If the table is fallback, then only unique secondary indexes are archived. – No indexes are dumped. • RESTORE Operation – If all AMPs are online, then all indexes are restored. – If the table is fallback, then only unique secondary indexes are restored. 17.Group Read Lock Option 作用:允许表在被修改的时候做备份操作(原因是此备份操作将数据修改日志一起做备份操作)。 限制条件: • Rolling archive while you are using a table. • Must have After-Image Journal defined. • Only valid for an all-AMPs archive. 操作结果: • All transactions will be included in the journal. • Journal must be archived. • Completed archive set includes data table archive and journal table archive. 18.Database DBC Archive DBC备份主要有下列内容: AccessRights Specification of all GRANTed rights AccLogRuleTbl Stores access logging specifications Accounts Lists all authorized account numbers CollationTbl Defines MULTINATIONAL collation DBase Definition of each DATABASE and USER Hosts Character set default override rules LogonRuleTbl User, host, password requirements Next Internal table for generating TABLE and DATABASE identifiers Owners Defines all databases owned by another Parents Defines the parent/child relationship between databases Profiles Defines Profiles(new with V2R5) Roles Defines Roles(new with V2R5) RoleGrants Contains Users and Roles granted to Roles (new with V2R5) RCConfiguration Records the configuration for the RCEvents rows RCEvent Records all archive and recovery events RCMedia Records all removable media used in archive activities 19.Summary • Archive and Recovery (ARC) is a command-line utility that performs three operations: archive, restore and recovery. • For small systems, the optimum number of sessions for archive and recovery operations is: – One per AMP vproc for archive – Two per AMP vproc for recovery • An archive operation can back up a single database or table, multiple databases or tables, or all databases. • Available archive levels are all-AMP, specific AMP and cluster archives. • The four types of archives are all-AMP database archive, all-AMP-table archive, specific-AMP or cluster archive and dictionary archive. Teradata参照B表修改A表 依照B表改A表: UPDATE tableA FROM tableB SET tableA.** = tableB.** WHERE tableB.** = tableA.** ; Teradata Dates and times Dates, times, and timestamps in Teradata (V2R4.1) can be a little tricky. This document explains how to do various things that you may want to do. 
Dates, times, and timestamps in Teradata (V2R4.1) can be a little tricky. This document explains how to do various things that you may want to do. Preliminaries The examples below use Teradata syntax, so they assume that you are running in BTEQ. If you are using Queryman, be sure to uncheck the option box "Allow use of ODBC SQL Extensions in queries". datecol means a column defined as DATE. Defining columns and formats Times and timestamps can be defined with any number of decimal places from 0 to 6 (time(0), timestamp(6), etc.). As it turns out, however, the Teradata hardware doesn’t keep track of anything beyond 2 decimal places (hundredths of seconds), so it is useless to define columns with more than 2 decimal places. The formats for all columns are described in the SQL Reference, Volume 3, Chapter 8. For dates, various combinations of YYYY, MM, DD, and so on are recognized, because those elements can be arranged in different ways. For instance, you can ask for a format of 'YYYY- MM-DD' or 'MMMbDD,bYYYY'. Time and timestamp fields have default formats that cannot be changed, because the order of the elements is fixed: hours always appear first, then minutes, and so on. Here are some guidelines: Date Type Length after Formatting (characters) time(0) 8 time(2) 11 timestamp(0) 19 timestamp(2) 22 Dates Number of days between two dates datecol - datecol will return the number of days between two dates. select date '2003-08-15' - date '2003-01-01'; (2003-08-15-2003-01-01) ----------------------- 226 Adding or subtracting months Generally speaking, you should use the ADD_MONTHS function to add months to a date (or to subtract months). Your project may require adding a number of days, but if calendar months are required, ADD_MONTHS is the way to go. select add_months(current_date, 3); ADD_MONTHS(Date, 3) ------------------- 2003-07-22 select add_months(current_date, -2); ADD_MONTHS(Date, -2) -------------------- 2003-02-22 /*** Last day of the month is still the last day ***/ select add_months(date '2002-01-31', 1); ADD_MONTHS(2002-01-31, 1) ------------------------- 2002-02-28 Computing the day of the week Computing the day of the week for a given date is not easy in SQL. If you need a weekday, I recommend that you look it up in the view sys_calendar.calendar (or join to it), thus: select day_of_week from sys_calendar.calendar where calendar_date = date '2003-05-01'; day_of_week ----------- 5 [i.e. Thursday] Computing the first day of a month select datecol - extract(day from datecol) + 1 This subtracts the number of days since the beginning of the month, taking you to "day 0", or the day before the first of the month; then adds 1. Computing the last day of a month select add_months((datecol - extract(day from datecol)+1),1)-1 Same idea, but this computes the first day of the following month and then subtracts 1. Special calendars Your business may have special requirements, such as a retail calendar that is always a multiple of weeks. Such data must be stored in a table and joined to in your queries. Times Changing floats to times Some Data Dictionary tables have time columns that are defined as FLOAT rather than TIME. Here’s how to convert them to TIMEs, believe it or not: select cast(cast(cast(TimeFld as format '99:99:99.99') as char(11)) as time(6)) from DBC.EventLog ... 
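To see what the triple cast does, here is a quick sketch with an assumed literal standing in for the FLOAT column (143205.00, i.e. 14:32:05 in hhmmss form):

select cast(cast(cast(143205.00 as format '99:99:99.99') as char(11)) as time(6));
/* the formatted string '14:32:05.00' is converted to the TIME value 14:32:05 */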
Time differences First we shall address the case where your time data is defined as a number (FLOAT or DECIMAL, perhaps) in hhmmss form—<; select ((time02 / 10000) * 3600 + (time02 / 100 MOD 100) * 60 + (time02 MOD 100)) - ((time01 / 10000) * 3600 + (time01 / 100 MOD 100) * 60 + (time01 MOD 100)) as time_diff from dttest2; time_diff ------------- 6432. [in seconds -- about 1.8 hours] If the earlier time could fall on one day and the later time on the next day, you may have to add 86,400 (the number of seconds in one day) to the later time, like so: select case when time02 >= time01 then ((time02 / 10000) * 3600 + (time02 / 100 MOD 100) * 60 + (time02 MOD 100)) - ((time01 / 10000) * 3600 + (time01 / 100 MOD 100) * 60 + (time01 MOD 100)) else /*** Midnight has passed ***/ (((time02 / 10000) * 3600 + (time02 / 100 MOD 100) * 60 + (time02 MOD 100)) + 86400) - ((time01 / 10000) * 3600 + (time01 / 100 MOD 100) * 60 + (time01 MOD 100)) end as time_diff from dttest2; time_diff ------------- 18094. Next we consider the case where your time data is defined as TIME(n). The usual way to take the difference of two times would be as follows: select time02 - time01 hour(2) to second from dttest3; (time02 - time01) HOUR TO SECOND -------------------------------- 5:02:40.000000 The above result has a data type of INTERVAL. If, however, you want to compute the difference in seconds, as above, you again have to split the times up: select (extract(hour from time02) * 3600 + extract(minute from time02) * 60 + extract(second from time02)) - (extract(hour from time01) * 3600 + extract(minute from time01) * 60 + extract(second from time01)) as time_diff from dttest3; time_diff ----------- 18160 /*** After midnight ... ***/ select case when time02 >= time01 then (extract(hour from time02) * 3600 + extract(minute from time02) * 60 + extract(second from time02)) - (extract(hour from time01) * 3600 + extract(minute from time01) * 60 + extract(second from time01)) else (extract(hour from time02) * 3600 + extract(minute from time02) * 60 + extract(second from time02) + 86400) - (extract(hour from time01) * 3600 + extract(minute from time01) * 60 + extract(second from time01)) end as time_diff from dttest3; time_diff ----------- 61360 Timestamps Extracting the date or time portion of a timestamp Extract the date or time portion of a timestamp thus: select cast(ts01 as date) from dttest; ts01 -------- 03/08/15 select cast(ts01 as time(0)) from dttest; ts01 -------- 03:04:05 Length of time between two timestamps You can subtract one timestamp from another. 
The result will be an interval, and you must specify a precision for the interval, like so: select ts01 - ts04 day(4) to second(0) from dttest; (ts01 - ts04) DAY TO SECOND --------------------------- 226 02:02:02 select ts04 - ts01 day(4) to second(0) from dttest; (ts04 - ts01) DAY TO SECOND --------------------------- -226 02:02:02 You can also convert this interval to seconds or minutes like so: /*** Difference in seconds ***/ select (ts01 - ts04 day(4) to second) as tsdiff, (extract(day from tsdiff) * 86400) + (extract(hour from tsdiff) * 3600) + (extract(minute from tsdiff) * 60) + extract(second from tsdiff) as sec_diff from dttest; tsdiff sec_diff ------------------ -------------- 2 07:11:24.000000 198684.000000 /*** Difference in minutes ***/ select (ts01 - ts04 day(4) to minute) as tsdiff, (extract(day from tsdiff) * 1440) + (extract(hour from tsdiff) * 60) + extract(minute from tsdiff) as min_diff from dttest; tsdiff min_diff -------- --------- 2 07:11 3311 Number of days between two timestamps If you just want the number of days between two timestamps and wish to ignore the time portion, either of the following two techniques will work, but note the differences. The first technique lops off the time portion of each timestamp, so it will be equivalent to subtracting the two days; the result is an integer. The second will take the time portion into account and return an interval, so it will not count periods of time less than 24 hours. Thus, the result could be one less than with the first technique. If you use the second technique, be sure to allow enough digits for DAY. sel cast(ts01 as date) - cast(ts04 as date) from dttest; (ts01-ts04) ----------- 226 -- type of this result is INTEGER sel ts01 - ts04 day(4) from dttest; (ts01 - ts04) DAY ----------------- 226 -- type of this result is INTERVAL DAY Other operations on timestamps Operand 1 Operator Operand 2 Result Type Timestamp + or - Interval Timestamp Interval + Timestamp Timestamp Interval + or - Interval Interval Interval * or / Numeric Interval Numeric * Interval Interval References Teradata RDBMS SQL Reference, Volume 3: Data Types and Literals, B035-1101-061A (June 2001). 
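The operand table above can be exercised directly; a few sketches with assumed literal values:

select timestamp '2003-08-15 10:30:00' + interval '2' day;      /* Timestamp + Interval -> Timestamp */
select timestamp '2003-08-15 10:30:00' - interval '90' minute;   /* 90 minutes earlier */
select interval '3' day * 4;                                     /* Interval * Numeric -> Interval */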
Teradata 时间函数 时间差异(月):select (date '2003-08-15' - date '2003-01-01') month; 时间差异(天):select date '2003-08-15' - date '2003-01-01'; 时间差异(秒):select ((time02 / 10000) * 3600 + (time02 / 100 MOD 100) * 60 + (time02 MOD 100)) - ((time01 / 10000) * 3600 + (time01 / 100 MOD 100) * 60 + (time01 MOD 100)) as time_diff 月份操作:select add_months(current_date, 3); ADD_MONTHS(Date, -2) add_months(date '2002-01-31', 1); 本月第一天:select date - extract(day from date) + 1 上月最后一天:select date '2008-03-03' - extract(day from '2008-03-03') decemal转成time:select cast(cast(cast(TimeFld as format '99:99:99.99') as char(11)) as time(6)) from DBC.EventLog 依照当前时间得到timestamp: select cast(current_date as timestamp(2)) + ((current_time - time '00:00:00') hour to second); 上个星期五:select * from sys_calendar.calendar where day_of_week = 6 --星期五 and calendar_date between date -6 and date; --上个星期 上周的今天:select * from sys_calendar.calendar where calendar_date = date-7 oleload导excel数据 看见同事为了导excel数据到数据库中,都是写个fastload脚本或者tpump脚本,这样在整数据格式的时候是很费时费力的操作,殊不知,teradata提供的oleload工具可以直接导excel数据到teradata库当中,操作简单: 1、select souce选项卡里面选择:Microsoft.Jet.OLEDB.4.0,会弹出对话框: a、“连接”选项卡:“选择或数据数据库名称”里面选择将要导入的数据源excel;用户名/密码可以不管; b、“高级”选项卡保持默认; c、“所有”选项卡:Extended Properties填上:Excel 8.0;IMEX=1 2.目标库当然选Teradata; 后面的操作不用说了,setting里面可以设置目标表名,编码等; lanch里面可以根据需要选择fastload,multiload等。 说明:目前国内的teradata版本只支持.xls为扩展名的文件,也就是说不支持excel2007文件。 Teradata访问权限及实体类型缩写 Teradata中有一些缩写,分别代表特定的访问权限或表类型,整理如下: 访问权限: 'AE'-->'ALTER EXTERNAL PROCEDURE'; 'AF'-->'ALTER FUNCTION'; 'AP'-->'ALTER PROCEDURE'; 'AS'-->'ABORT SESSION'; 'CA'-->'CREATE AUTHORIZATION'; 'CD'-->'CREATE DATABASE'; 'CE'-->'CREATE EXTERNAL PROCEDURE'; 'CF'-->'CREATE FUNCTION'; 'CG'-->'CREATE TRIGGER'; 'CM'-->'CREATE MACRO'; 'CO'-->'CREATE PROFILE'; 'CP'-->'CHECKPOINT'; 'CR'-->'CREATE ROLE'; 'CT'-->'CREATE TABLE'; 'CU'-->'CREATE USER'; 'CV'-->'CREATE VIEW'; 'D'-->'DELETE'; 'DA'-->'DROP AUTHORIZATION'; 'DD'-->'DROP DATABASE'; 'DF'-->'DROP FUNCTION'; 'DG'-->'DROP TRIGGER'; 'DM'-->'DROP MACRO'; 'DO'-->'DROP PROFILE'; 'DP'-->'DUMP'; 'DR'-->'DROP ROLE'; 'DT'-->'DROP TABLE'; 'DU'-->'DROP USER'; 'DV'-->'DROP VIEW'; 'E'-->'EXECUTE (MACRO'; 'EF'-->'EXECUTE FUNCTION'; 'I'-->'INSERT'; 'IX'-->'INDEX'; 'MR'-->'MONITOR RESOURCE'; 'MS'-->'MONITOR SESSION'; 'PC'-->'CREATE PROCEDURE'; 'PD'-->'DROP PROCEDURE'; 'PE'-->'EXECUTE PROCEDURE'; 'R'-->'RETRIEVE/SELECT'; 'RF'-->'REFERENCE'; 'RS'-->'RESTORE'; 'RO'-->'REPLICATION CONTROL or REPLICATION OVERIDE'; 'SR'-->'SET RESOURCE RATE'; 'SS'-->'SET SESSION RATE'; 'U'-->'UPDATE'; 'UM'-->'UDT METHOD'; 'UT'-->'UDT TYPE'; 'UU'-->'UDT USAGE'; 实体类型: 'A'-->'AGGREGATE UDF'; 'B'-->'COMBINED AGGREGATE AND ORDERED ANALYTICAL FUNCTION'; 'E'-->'EXTERNAL STORED PROCEDURE'; 'F'-->'SCALAR UDF'; 'G'-->'TRIGGER'; 'H'-->'INSTANCE OR CONSTRUCTOR METHOD'; 'I'-->'JOIN INDEX'; 'J'-->'JOURNAL'; 'M'-->'MACRO'; 'N'-->'HASH INDEX'; 'P'-->'STORED PROCEDURE'; 'Q'-->'QUEUE TABLE'; 'R'-->'TABLE FUNCTION'; 'S'-->'ORDERED ANALYTICAL FUNCTION'; 'T'-->'TABLE'; 'U'-->'USER-DEFINED DATA TYPE'; 'V'-->'VIEW'; 'X'-->'AUTHORIZATION'; DOS,UNIX中的回车换行区别 这几天写了个多线程的数据抽取perl脚本,速度很快,可达到1G/m以上。可是今天做fastload加载时,被破了瓢冷水: 抽过来的所有的文件都无法正常的加载,表现的形式是我抽取的数据始终要比BOSS送过来的数据大那么一点点,但记录 条数没有异常。于是我开始了艰难的查错之旅: 1.用UE查看数据文件,无明显异常: 2.用cygwin查看各条记录的长度,输出长度不对的行: awk '{ if ( length($0) !=行字节数) {print $0;print NR} }' xxx.dat 结果令我失望,所有行的长度均正常; 3.编写一perl脚本,比较我抽取的数据文件和正确文件的差异: #!/user/bin/perl use strict; use File::Basename; use POSIX qw(strftime); open(IN, "E:ETLDATAprocessQ01_CUST_PERSON_INFO_T_20090203.dat") or die "open error: $!"; my @x = ; close (IN); open(IN, 
"E:ETLDATAprocessQ01_CUST_PERSON_INFO_T_20090203_1.dat") or die "open error: $!"; my @y = ; close (IN); chomp(@y); my $p = join('|', map{quotemeta} @y); #print (grep /$p/, @x); print (grep ! /$p/, @x); 五分钟后,运行完毕,结果令人沮丧,输出日志大小为0; 4.我开始怀疑是不可见字符引起的异常,可是这个怎么查呢? 5.新的一天开始了,有高手过来,机会难得,开问。高手就是高手,听了我的描述,他决定用UE查看文件的十六进制编码, 用UE打开,弹出窗口:"转换成DOS模式",点“是”,比较两文件的十六进制编码,发现完全一样! 怎么办呢?先看看大小差异有没有什么规律: 一行数据可见字符的长度为388*25538=9908744 正常:9934282 = 9908744 + 25538 异常:9959820 = 9934282 + 25538 也就是说正常情况下不可见字符数为每行1个,而我抽取的为每行2个 再用UE打开,弹出窗口:"转换成DOS模式",点“是”,比较两文件的十六进制编码,重点关注不可见的回车换行符,发现还是完全一样!怎么回事呢?突然想起BOSS的数据库环境为UNIX,是不是编码不一样呢?找到UE菜单中“DOS到UNIX”转换,转换格式,保存。发现大小变为正确值,fastload,成功。突然明白,查看十六进制编码完全一样的罪魁祸首在于惯性的点击UE弹出的"转换成DOS模式"。验证一下,点击弹出窗口的“否”,果然发现区别,抽取数据的回车换行的编码为“0A0D”,正确数据为“0D”。 结论:dos与unix在文件中换行字符的表现机置存在差异: dos格式下,切换一行是二个字符的,由回车(od),换行(oa),文件尾部直接EOF(文件结束标志) 而unix下,切换一行,使用的是单换行符(oa),文件最后一行也会增加该字符,然后才是eof 验证如下: windos下建一个文件,1.txt,输入: afew aef(到此打住) 使用ultraedit的十六进制查看,或者上传到unix机,使用:"xxd 文件名"以十六进制查看文件,结果如下: 0000000: 6166 6577 0d0a 6165 66 afew..aef 增加了Od,Oa二个字符 而在unix下,简单创建一个同样的文件:"vi 2.txt",输入 afew aef(同样到此打住) 使用"xxd 2.txt" 查看: 0000000: 6166 6577 0a61 6566 0a afew.aef. 可以看到,第一行后边只有一个"oa" 换行符,同时最后一行也自动增加了oa字符 现在用wc查看下二个文件的情况: >wc -l 1.txt 2.txt 1 2 9 1.txt 2 2 9 2.txt 3 4 18 total 可以看到,二个文件的字符数是一样的,这个上边已经分析. 使用UE转化工具dos2unix,功用是把dos的二个切换符转化成unix的单个字符 >dos2unix 1.txt >wc 1.txt 1 2 8 1.txt >xxd 1.txt 0000000: 6166 6577 0a61 6566 afew.aef 发现回车符已经被去掉了., 同样使用转化工具unix2dos,功用是把unix下的单换行字符转化为dos下的二个字符: >unix2dos 2.txt >wc 2.txt 2 2 11 2.txt >xxd 2.txt 0000000: 6166 6577 0d0a 6165 660d 0a afew..aef.. 可以看到,二个单换行符已经转化成dos下的四个字符,文件大小也多了2个字符. 现在继续使用wc查看下这二个文件情况: >wc 1.txt 2.txxt 1 2 8 1.txt 2 2 11 2.txt 3 4 19 total 现在文件的大小问题 ,已经换行符文题已经解决了,再来关注一下unix下一些工具所受到的影响 ,仔细看可以发现一个问题 :不管是什么格式的,在windows下生成的和在dos下生成的,行数都差一,dos下创建的文件最后一行哪里去了?没有统计到吗? 是的,的确没统计到,wc根据行未的oa字符判断的行数,所以结果是最后一行被忽略了.而dos2unix仅是对已有的0a/0d到oa之前的转化,对于windows下创建的文件进行dos2unix,也不会在最后一行里增加oa字符,所以即使用dos2unix后,文件wc结果也会少一行 同样受到影响的还有while语句 >while read line; do echo $line; done <1.txt afew 可以看到,结果也只打出了一行, sed就不受这个的影响了: >sed 's/a/ccc/g' 1.txt cccfew cccef 再试awk命令: >awk '{print $0}' 1.txt afew aef 也没有受到影响 目前常用到的shell命令,就是wc/while语句,受到影响. spool空间 spool空间的最大值得依赖于表大小以及其使用频率。 Spool空间在创建用户的时候指定,其值可以依据需要变动。 没有计算spool的公式, 某个session使用的spool空间可以在DBC.DISKSPACE视图中查找。 下面是查看某用户spool空间的sql: SELECT DATABASENAME (Title 'User') ,sum(MaxSpool) (format '---,---,---,---,--9')(char(19))(title 'max Spool ') ,sum(PeakSpool) (format '---,---,---,---,--9')(char(19))(title 'Peak Spool ') ,sum(Maxtemp) (format '---,---,---,---,--9')(char(19))(title 'max temp ') ,sum(PeakTemp) (format '---,---,---,---,--9')(char(19))(title 'Peak Temp ') FROM DBC.DISKSPACE WHERE DatabaseName = 'Joe_User' ORDER BY 1 GROUP BY 1; 下面的sql可以提供分配给某用户的spool空间及临时空间信息: SELECT A.DatabaseName (CHAR(20)) , A.OwnerName (CHAR(20)) , A.AccountName (CHAR(20)) , B.DefaultDatabase (CHAR(20)) , (A.SpoolSpace / 1000000000) (NAMED Spool_In_gig) , (A.TempSpace / 1000000000) (NAMED Temp_In_gig) FROM DBC.DATABASES A , DBC.DBASE B WHERE A.DatabaseName = B.DatabaseNamei AND A.DatabaseName = 'Joe_User' ORDER BY 1; 修改某用户spool,temp空间的方法很多,下面方法1: create macro reset_pk_spool as ( UPDATE dbc.DataBaseSpace SET PeakSpoolSpace = 0 where databaseid = '0000841D'XB /* Userid NBD7UZX */ and tableid = '000000000000'XB ; ); 用databaseid及tableid来操作的原因是UPI update效率较高,并且锁范围最小。 (Table DBC.DATABASESPACE is a non-hashed table. A non-hashed system table appears to behave the same as a hashed table, but the data is actually local to the AMP - no matter the definition of the primary index. 
This means that you can't use the PI to access a non-hashed table. Instead, a full-table (or all rows) scan must be performed. ) 可用如下sql来获取databaseid: locking table dbc.DBase for access select * from dbc.DBase where DatabaseNameI = 'NBD7UZX'; The tableid of low values ('000000000000'XB) is for the userid. 用这个方法的时候千万小心,最好是在需要的时候创建此macro,操作完成后删掉,以免误操作引发灾难。 miguelmin 发表于:2009.02.22 14:08 ::分类: ( Teradata ) ::阅读:(140次) :: 评论 (1) :: 引用 (0) 更正 [回复] 表DBC.DATABASESPACE是一个非哈希表。非哈希表的执行看起来类似哈希表,但是它的执行是用不到PI的,不管你有没有定义PI,所以上述的UPDATE仍然会做全表扫描。 Teradata join 1.SELECT Statement ANSI Join Syntax 版本V2R2以后,Teradata支持ANSI join语法及外连接: SELECT colname [, colname , …] FROM tabname [aname] [INNER] JOIN LEFT [OUTER] JOIN RIGHT [OUTER] JOIN FULL [OUTER] JOIN CROSS JOIN tabname1 [aname] ON condition ; INNER JOIN: 所有匹配的行. LEFT OUTER JOIN: 以左边表为准,右边不能匹配的行填NULL. RIGHT OUTER JOIN: 以右边表为准,左边不能匹配的行填NULL. FULL OUTER JOIN : Both tables are used to qualify and extended with nulls. CROSS JOIN : Product join or Cartesian product join. 2.Example of ANSI and Teradata JOIN Syntax ANSI语法: SELECT D.Department_Number AS "Dept Number" ,D.Department_Name AS "Dept Name" ,E.Last_Name AS "Last Name" ,E.Department_Number AS "Emp Dept" FROM Department D INNER JOIN Employee E ON E.Department_Number = D.Department_Number; Teradata语法: SELECT D.Department_Number AS "Dept Number" ,D.Department_Name AS "Dept Name" ,E.Last_Name AS "Last Name" ,E.Department_Number AS "Emp Dept" FROM Department D ,Employee E WHERE E.Department_Number = D.Department_Number; 注:上述sql虽没有明确定义为inner join,但Teradata解析器仍会将其解释为inner join. 3.Join Processing 表与表之间的left outer join主要有以下形式: – product join乘积连接(Product Join) – merge join merge join :合并连接。读入两个表,分别排序后连接 – hash join hash join :哈希连接。将其中一个表读入内存,通过hash计算和另一个表建立连接 – nest join 嵌套关联(Nested Joins) 嵌套连接(Nested loop) – exclusion join 而较常见的是product join和merge join。 表连接方式NESTED LOOP、HASH JOIN、SORT MERGE JOIN 表连接方式及使用场合 NESTED LOOP 对于被连接的数据子集较小的情况,nested loop连接是个较好的选择。nested loop就是扫描一个表,每读到一条记录,就根据索引去另一个表里面查找,没有索引一般就不会是 nested loops。 一般在nested loop中, 驱动表满足条件结果集不大,被驱动表的连接字段要有索引,这样就走nstedloop。如果驱动表返回记录太多,就不适合nested loops了。如果连接字段没有索引,则适合走hash join,因为不需要索引。 可用ordered提示来改变CBO默认的驱动表,可用USE_NL(table_name1 table_name2)提示来强制使用nested loop。 HASH JOIN hash join是CBO 做大数据集连接时的常用方式。优化器扫描小表(或数据源),利用连接键(也就是根据连接字段计算hash 值)在内存中建立hash表,然后扫描大表,每读到一条记录就来探测hash表一次,找出与hash表匹配的行。 当小表可以全部放入内存中,其成本接近全表扫描两个表的成本之和。如果表很大不能完全放入内存,这时优化器会将它分割成若干不同的分区,不能放入内存的部分就把该分区写入磁盘的临时段,此时要有较大的临时段从而尽量提高I/O 的性能。临时段中的分区都需要换进内存做hash join。这时候成本接近于全表扫描小表+分区数*全表扫描大表的代价和。 至于两个表都进行分区,其好处是可以使用parallel query,就是多个进程同时对不同的分区进行join,然后再合并。但是复杂。使用hash join时,HASH_AREA_SIZE初始化参数必须足够的大,如果是9i,Oracle建议使用SQL工作区自动管理,设置WORKAREA_SIZE_POLICY 为AUTO,然后调整PGA_AGGREGATE_TARGET即可。 以下条件下hash join可能有优势: 两个巨大的表之间的连接。 在一个巨大的表和一个小表之间的连接。 可用ordered提示来改变CBO默认的驱动表,可用USE_HASH(table_name1 table_name2)提示来强制使用hash join。 SORT MERGE JOIN sort merge join的操作通常分三步:对连接的每个表做table access full;对table access full的结果进行排序;进行merge join对排序结果进行合并。sort merge join性能开销几乎都在前两步。一般是在没有索引的情况下,9i开始已经很少出现了,因为其排序成本高,大多为hash join替代了。 通常情况下hash join的效果都比sort merge join要好,然而如果行源已经被排过序,在执行sort merge join时不需要再排序了,这时sort merge join的性能会优于hash join。 在全表扫描比索引范围扫描再通过rowid进行表访问更可取的情况下,sort merge join会比nested loops性能更佳。 可用USE_MERGE(table_name1 table_name2)提示强制使用sort merge join。 还有平时使用最多的嵌套连接(Nested loop) 做join的行必须在相同的AMP上: • 如果需要,系统将创建一行或多行的spool拷贝并将其移动到公共AMP • 连接处理从不移动或改变任何源表数据. 
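A quick way to see this row relocation is to EXPLAIN a join whose join column is the PI on only one side; a sketch, assuming the Employee and Department tables from the syntax example above, with PI(Employee_Number) on Employee and PI(Department_Number) on Department:

EXPLAIN
SELECT E.Last_Name, D.Department_Name
FROM Employee E
INNER JOIN Department D
ON E.Department_Number = D.Department_Number;  /* PI column on Department, non-PI column on Employee */

In the resulting plan, the Employee rows are expected to be spooled and either redistributed by the hash of Department_Number or, if Department is small enough, Department is duplicated on all AMPs - the two strategies described under Join Redistribution below.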
优化器基于以下原则选择最好的join方式: • 有用的索引 • 统计信息 (COLLECTed STATISTICS or Dynamic Sample) 四种join的区别可类似于两数组的比较来解释: 假定有A,B两个数组,现在需要得到两数组中的相同元素,有以下几种方法: 算法1: A,B均先排序,再做比较 算法复杂度:o(nlogn)-------->Merge join 算法2: A,B均不排序,做比较 算法复杂度:o(n^2)-------->Product join 算法3: A排序,B不排序,B通过二分法做比较 算法复杂度介于前两者之间------>Hash join 注:选择merge join 还是product join主要是看比较时间长还是排序时间长. 4.Optimizer Minimizes Spool Usage 优化器利用以下方法将spool空间最小化: • 仅投影(拷贝)查询需要的列. • 先做单表关联(减少数据行). • 只要可能,只将小表置于spool空间. 优化器在做join前将spool空间最小化: • Applies SET conditions first (WHERE). • Only the necessary columns are used in Spool. 5.Row Selection 只要可能,在join时加上行选择条件 • 列投影永远先于join,行选择一般先于join. • 技巧:减少参与join的行数可大大提升join效率. 6.Join Redistribution 两表关联,若被关联的两个字段均不为二表的PI,则需要做下列操作之一: 1>重新分布:将被关联的字段设为PI,数据做重新分布; 2>小表复制:将小表的内容复制到一个AMP上,最终所有AMP均包含小表所有数据 两表做Merge join时,有三种情况: • 最好情况:参与join的列分别是两表的PI,join操作可立刻执行. -- 数据已经在相同的AMP上,不需要数据转移到其他AMP上; -- 数据行已经按照哈希排序存储,不需要做排序操作. • 次之:参与join的列有一个是表的PI,另一个不是. -- 非PI表数据必须按照参与join的列值做hash重分布到目标AMP上; -- 如果是小表,将全表复制到所有AMP上以便做join; -- 如果表较大,将涉及到的行拷贝到目标AMP上. • 糟糕情况:参与join的列均不是表的PI. -- 两个表将均依照join字段做重分布(或小表复制) 7.Duplicating a Table in Spool • 对merge join,优化器可能选择复制小表到每个AMP. • 对product join,优化器总会复制一个表到所有AMP. • 对以上两种情况,每个AMP都必须要有足够的空间以保证数据复制完成. 8.Merge Join 被关联的数据行必须在相同AMP上: • Merge Join reads blocks from both tables only once. • Usually chosen for an equality join condition. • Generally more efficient than a product join. Merge join步骤: • 识别小表. • 如果需要的话: – Put qualifying data of one or both tables into spool(s). – Move the spool rows to AMPs based on the join column hash. – Sort the spool rows into join column hash sequence. • 利用hash值做数据匹配. 9.嵌套关联(Nested Joins) • 一种特殊的关联. • 唯一的一种不会总是涉及到所有AMP的关联. • 从系统资源方面来说,这是最高效的一种方式. • OLTP应用的最好选择. • 优化器必须拥有以下信息才会选择Nested Join: -- 第一个表中(UPI或USI)的等值. -- 第二个表中相对于索引具有唯一值的列. • 系统从表1中得到唯一数据行. • 利用hash值得到表2中匹配的数据行. example: SELECT E.Name,D.Name FROM Employee E INNER JOIN Department D ON E.Dept = D.Dept WHERE E.Enum = 5; 10.乘积join(Product Join) 下列情况均可产生product join: • The WHERE clause is missing. • A Join condition is not based on equality (<>, LESS THAN, GREATER THAN). • Join conditions are OR together. • There are too few Join conditions. • A referenced table is not named in any Join condition. • Table aliases are incorrectly used. • The Optimizer determines that it is less expensive than the other Join types. • Does not sort the rows. • May re-read blocks from one table if AMP memory size is exceeded. • It compares every qualifying Table1 row to every qualifying Table2 row. • Those that match the WHERE condition are saved in spool. • It is called a Product Join because:Total Compares = # Qualified Rows Table 1 * # Qualified Rows Table 2 • The internal compares become very costly when there are more rows than AMP memory can hold at one time. • They are generally unintentional and often give meaningless output. • Product Join process: -- Identify the Smaller Table and duplicate it in spool on all AMPs. -- Join each spool row for Smaller Table to every row for Larger Table. 11.Cartesian Product • This is an unconstrained Product join. • Each row of Table1 is joined to every row in Table2. • Cartesian Product Joins consume significant system resources. • Cartesian Product Joins rarely have practical business use. • Cartesian Product Joins frequently occur when: -- A join condition is missing or there are too few join conditions. -- Join conditions are not based on equality. -- A referenced table is not named in any join condition. -- Table aliases are incorrectly used. 
• The transaction aborts if it exceeds the user’s spool limit. 12.Hash Join Hash Join:小表依照row hash顺序存储,然后重分布或复制到所有的AMP.然后大表开始每行一条的处理数据,所以不需要将大表按照hash顺序存储. 可用以下DBS空值列来打开hash join和为hash join分配空间: • HTMemAlloc • SkewAllowance 优化技术可有效的将小表放入cache,然后将其与spool中未排序的大表做join. Row Hash Join Process: • Identify the smaller table. • Redistribute or duplicate the smaller table in memory across the AMPs. • Sort the cache memory into join column row hash sequence. • Hold the rows in memory. • Use the join column row hash of the larger table to binary search memory for a match. 这类join避免大表排序,但有时会大表复制或重分布. 13.Exclusion Joins • 返回无匹配记录的行. • 可能是merge或product join. • 导致NOT IN字句和EXCEPT操作. • 在可空列上使用3种逻辑值. • 如果可能在建表时对可能出现NOT IN操作的列加上NOT NULL属性. • 说明: 在可空字段上做join时加上 WHERE colname IS NOT NULL 条件. Exclusion Joins(NOT IN)的三条规则: • Any True – 滤掉 • Any Unknown – 滤掉 • All False – 保留. 14.(多表关联)n-Table Joins • 优化器一次只能处理处理两个表. • join的结果再与第三个表做join. • 所有的多表关联都会被分解为两表连接. • 优化器自动尝试决定最好的join顺序. • 收集关联字段统计信息可以帮助优化器做好的选择. SELECT …. FROM Table_A, Table_B, Table_C, Table_D WHERE . . . ; 15.Join Considerations with PPI PPI是基于Teardata已有应用的木块扩展,因此,所有的join算法均支持PPI 同NPPI相比,用PPI时,如果在查询约束中,大数目的分区没有消除,Row Hash Merge Join的性能可能会比较糟. 原因: • 同NPPI相比,PPI的Row Hash Merge Join算法比较复杂并且需要更多的资源(假设有效数据块数目相等). • since rows are not in hash order, but rather in partition/hash order. 对两个PPI不同的表做关联,对一个PPI表和一个NPPI表做关联,有以下三种方法: • One option is to spool the PPI table (or both PPI tables) into a non-PPI spool file in preparation for a traditional merge join. • A second option (not always available) is to spool the non-PPI table (or one of the two PPI tables) into a PPI spool file, with identical partitioning to the remaining table, in preparation for a rowkey-based merge join. • The third approach is to use the sliding window join of the tables without spooling either one. The optimizer will consider all reasonable join strategies, and pick the one that has the best-estimated performance. NPPI与PPI做等值连接: 1)、分区较少的情况: • Teradata将保持NPPI表中的数据块,PPI表中的数据每个分区一块,以提高join性能. • 即使没有分区被消除,性能与NPPI表到NPPI表的连接相当(假定分区极少) – 相同数目的磁盘I/O (排除非常规情况 - 一个hash值对应多个数据行) – 内存需求高 – CPU利用稍高 – 如果加上分区限制,查询效率将要高得多. 2)、分区较多的情况 • Teradata保持NPPI表的块,将PPI表的块尽可能多的装入内存,利用滚动窗口技术让连接效率最佳 • 这类连接的效率一般NPPI到NPPI表的join要低,除非分区限制可以很大缩减工作量 -- 更多数目的磁盘I/O (NPPI表的数据块必须重复扫描多次). NPPI与PPI表join的I/O数目是:(p/k * d1) + d2 NPPI与NPPI表join的I/O数目是:d1 + d2 (说明: d1 = NPPI表的数据块数目 d2 = PPI表的数据块数目 p = 参与join的分区数目 k = PPI表可以加到内存中的分区数 ) -- 需要更多的内存 -- 需要更多的CPU资源 • 为了得到较好的性能,在查询中尽量多的限制以消除最多的分区 -- p/k的值要尽量的小,即往内存中加入数据的次数尽量少 3)、滑动窗口 最直接的连接NPPI与PPI表的方法是将NPPI表与PPI表的逐个分区做关联,也就是将整个关联变化成一些列的子关联. 
这样操作的效率可能较低,特别是NPPI表特别大时 16.Join Processing Summary product join和merge join的概念 1)product join就是乘积关联,做法是对左表的每一条记录都跟右表的所有记录进行关联(即所说的笛卡尔积),然后根据关联字段,筛出符合关联条件的记录。此法的缺点是作此关联所付出的系统的CPU和IO代价较大(由于笛卡尔积的关系) 但相对与merge join来说它的优点是左表和右表都不需要排序,因此teradata优化器只在右表记录数和导出字段字节数乘积较小的情况下(一般凭经验,记录数小于25条以下,字段字节数没测过)或者关联条件包含非等式关联或者关联条件中带有OR时才选择product join; 2)merge join就是合并关联,做法是对左右表分别进行排序,然后根据关联条件进行匹配关联。其优点时每一个左表记录只会跟右表的一条或某几条记录进行关联,大大减少了CPU和IO的花费,但缺点是对于右表记录数较少的情况下,对非常大的左表进行排序较浪费数据库资源。总的说来,merge join是一种有较高效率的关联算法,因此teradata优化器一般情况下都采用merge join,只在右表记录数和导出字段字节数乘积较小的情况下采用product join 重分布还是全拷贝 由于teradata数据库是一种分布式并行数据库,记录是根据特定字段的值通过哈西计算后分布在多个AMP上,因此要实现关联运算,除了采用何种关联算法(即采用product join还是merge join,还是。。。)的问题,还有左表和右表如何分布到多个AMP上的问题。一般情况下表是根据PI值做分布,但是关联时不是。 1)对于右表记录数较少的情况下,teradata优化器会将右表的记录在每一个AMP上都复制一份,也就是说如果数据库有50AMP,10条记录的表经过拷贝后在数据库中就有500记录,此种做法就是全拷贝。可以想象此种做法同product join一样,在右表记录数较多情况下,比较浪费系统的IO资源和SPOOL空间,但是它也有好处,好处是它能够避免左表的重分布(下面提到的概念)。 2)如果右表是分区导出表,teradata优化器除了主索引关联的情况下,无法如何总是采用全拷贝的方法分布右表. 3)如果右表记录数较多,同时不是分区导出表的情况下,teradata优化器会将左右表根据关联字段重新计算HASH值,然后根据HASH值做AMP的分布,即关联字段替代了PI成为SPOOL空间中关联表的主索引.此法就叫做重分布.其优点是减低了系统的IO资源和SPOOL的空间的使用(表记录数在重分布前后不变),但缺点是由于重分布,导致了节点间的数据移动和bynet的负荷增大,同时导致了AMP数据分布的不平均。 多关联SQL 对于一个左表(主表)同多个右表(维表,关联表)关联时,teradata总是一个一个的关联,即先拿左表同右表1关联生成新的左表,然后再拿新的左表同右表2关联生成新的左表,如此反复,直到右表都关联完毕。此法的效率相当有问题。当左表较大,而右表较小的情况,每做一次关联,就要进行哈希重分布或全拷贝,同时还进行一次表的全扫描,资源浪费较大。 关于优化的一些建议 1)对于左表极大而右表的记录数较少的情况下,建议通过使用导出分区表强制teradata优化器使用全拷贝,避免了左表的重分布和数据分布不均匀 2)对于右表极大但是又是分区导出表的情况下,建议采用无分区的临时表,先将分区导出表插入到临时表,通过左表和临时表进行关联,以避免teradata优化器使用全拷贝的方式 3)多于多关联SQL,可以建立临时表,先将多个很小的右表通过笛卡尔积形成大表(注意控制大表条数,一般控制在千万级效果较佳),然后再将左表同关联好的右表进行关联。 collect statistics 收集统计信息对于Teradata sql执行计划的影响是非常显著的。因此,Teradata数据库DBA都会周期性的收集统计信息。如何确定哪些对象需要收集信息呢?总结起来就一句话:对经常作为关联条件或经常作为取数条件的字段收集信息。理论上,统计信息收集的越详细,sql的执行计划越优化。实际操作中,为了避免数据量过大而影响日常加载,可以采取抽样收集信息的方式。 还有一个问题,我怎么知道目前数据库中统计信息的收集情况呢?下面是查看系统当前统计信息情况的sql: SELECT DatabaseName, TableName, ColumnName, StatsType, CollectDate, CollectTime, CASE SampleSize WHEN 0 THEN 100 ELSE SampleSize END AS SampleSize, (-1**(NumRowsw1 / 32768)) --sign * (2**((NumRowsw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumRowsw1 mod 16) * 2**-4) + (NumRowsw2 * 2**-20) + (NumRowsw3 * 2**-36) + (NumRowsw4 * 2**-52)) --fraction as NumRows, (-1**(NumValuesw1 / 32768)) --sign * (2**((NumValuesw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumValuesw1 mod 16) * 2**-4) + (NumValuesw2 * 2**-20) + (NumValuesw3 * 2**-36) + (NumValuesw4 * 2**-52)) --fraction as NumValues, (-1**(NumNullsw1 / 32768)) --sign * (2**((NumNullsw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumNullsw1 mod 16) * 2**-4) + (NumNullsw2 * 2**-20) + (NumNullsw3 * 2**-36) + (NumNullsw4 * 2**-52)) --fraction as NumNulls, (-1**(ModeFreqw1 / 32768)) --sign * (2**((ModeFreqw1/16 mod 2048) - 1023)) --exponent * (1 + ((ModeFreqw1 mod 16) * 2**-4) + (ModeFreqw2 * 2**-20) + (ModeFreqw3 * 2**-36) + (ModeFreqw4 * 2**-52)) --fraction as ModeFreq FROM ( SELECT DatabaseName, TableName, ColumnName, Stats, StatsType, ( (HASHBUCKET (SUBSTR(Stats, 2, 1) || SUBSTR(Stats, 1, 1) (BYTE(4)) ) - 1900 ) * 10000 + (HASHBUCKET ('00'xb || SUBSTR(Stats, 3, 1) (BYTE(4)) ) ) * 100 + (HASHBUCKET ( '00'xb || SUBSTR(Stats, 4, 1) (BYTE(4)) ) ) ) (DATE) AS CollectDate, (CAST( (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 5, 1) AS BYTE(4)) ) (FORMAT '99:') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 6, 1) AS BYTE(4)) ) (FORMAT '99:') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 7, 1) AS BYTE(4)) ) (FORMAT '99.') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 8, 1) AS BYTE(4)) ) (FORMAT '99') ) AS TIME(2)) ) AS CollectTime, HASHBUCKET 
('00'xb || SUBSTR(Stats, 12, 1) (BYTE(4))) AS SampleSize, HASHBUCKET(substr(Stats, 12+8, 1) || substr(Stats, 12+7, 1) (byte(4))) as NumNullsw1, HASHBUCKET(substr(Stats, 12+6, 1) || substr(Stats, 12+5, 1) (byte(4))) as NumNullsw2, HASHBUCKET(substr(Stats, 12+4, 1) || substr(Stats, 12+3, 1) (byte(4))) as NumNullsw3, HASHBUCKET(substr(Stats, 12+2, 1) || substr(Stats, 12+1, 1) (byte(4))) as NumNullsw4, HASHBUCKET(substr(Stats, 40+Offset+8, 1) || substr(Stats, 40+Offset+7, 1) (byte(4))) as ModeFreqw1, HASHBUCKET(substr(Stats, 40+Offset+6, 1) || substr(Stats, 40+Offset+5, 1) (byte(4))) as ModeFreqw2, HASHBUCKET(substr(Stats, 40+Offset+4, 1) || substr(Stats, 40+Offset+3, 1) (byte(4))) as ModeFreqw3, HASHBUCKET(substr(Stats, 40+Offset+2, 1) || substr(Stats, 40+Offset+1, 1) (byte(4))) as ModeFreqw4, HASHBUCKET(substr(Stats, 48+Offset+8, 1) || substr(Stats, 48+Offset+7, 1) (byte(4))) as NumValuesw1, HASHBUCKET(substr(Stats, 48+Offset+6, 1) || substr(Stats, 48+Offset+5, 1) (byte(4))) as NumValuesw2, HASHBUCKET(substr(Stats, 48+Offset+4, 1) || substr(Stats, 48+Offset+3, 1) (byte(4))) as NumValuesw3, HASHBUCKET(substr(Stats, 48+Offset+2, 1) || substr(Stats, 48+Offset+1, 1) (byte(4))) as NumValuesw4, HASHBUCKET(substr(Stats, 56+Offset+8, 1) || substr(Stats, 56+Offset+7, 1) (byte(4))) as NumRowsw1, HASHBUCKET(substr(Stats, 56+Offset+6, 1) || substr(Stats, 56+Offset+5, 1) (byte(4))) as NumRowsw2, HASHBUCKET(substr(Stats, 56+Offset+4, 1) || substr(Stats, 56+Offset+3, 1) (byte(4))) as NumRowsw3, HASHBUCKET(substr(Stats, 56+Offset+2, 1) || substr(Stats, 56+Offset+1, 1) (byte(4))) as NumRowsw4 FROM ( SELECT DatabaseName, TableName, MAX(CASE WHEN ColumnPosition = 1 THEN TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 2 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 3 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 4 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 5 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 6 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 7 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 8 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 9 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 10 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 11 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 12 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 13 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 14 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 15 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 16 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition > 16 THEN ',...' 
ELSE '' END) AS ColumnName, 'I' AS StatsType, MAX(CASE WHEN SUBSTR(IndexStatistics, 23, 1) = '00'XB THEN 16 ELSE 0 END) AS Offset, MAX(SUBSTR(IndexStatistics, 1, 80)) AS Stats FROM dbc.indexstats GROUP BY DatabaseName, TableName, StatsType, IndexNumber UNION ALL SELECT DatabaseName, TableName, MAX(CASE WHEN ColumnPosition = 1 THEN TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 2 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 3 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 4 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 5 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 6 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 7 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 8 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 9 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 10 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 11 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 12 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 13 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 14 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 15 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 16 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition > 16 THEN ',...' ELSE '' END) AS ColumnName, 'M' AS StatsType, MAX(CASE WHEN SUBSTR(ColumnsStatistics, 23, 1) = '00'XB THEN 16 ELSE 0 END) AS Offset, MAX(SUBSTR(ColumnsStatistics, 1, 80)) AS Stats FROM dbc.MultiColumnStats GROUP BY DatabaseName, TableName, StatsType, StatisticsID UNION ALL SELECT DatabaseName, TableName, ColumnName, 'C' AS StatsType, CASE WHEN SUBSTR(fieldStatistics, 23, 1) = '00'XB THEN 16 ELSE 0 END AS Offset, SUBSTR(fieldstatistics, 1, 80) AS Stats FROM dbc.columnstats ) dt WHERE Stats IS NOT NULL ) dt ORDER BY DatabaseName, TableName, ColumnName ; teradata in/existx/not in/not exists vs Oracle in/existx/not in/mot exists Teradata 执行计划: 1).in 和 exists的执行计划是一样的; 2).EXISTS/NOT EXISTS执行计划的唯一区别是join类型:Inclusion vs. exclusion; 3).IN/NOT IN :如果连接字段可为空,将产生完全不同的执行计划,因为优化器需要考虑空值比较的第三种逻辑值(UNKNOWN) 在不能确保字段非空的情况下,建议使用NOT EXISTS,而不只用NOT IN 个人人为 exists/not exists 要比 in/not in更快: 1).在exists/not exists中,外层查询要问的问题仅仅是:内查询有返回数据行么,并基于此触发exists/not exists 2).在in/not in中,内存查询中的每个返回行都会去同外层查询结果比对。 对于IN,有三种选项: 1).select stuff from A, B where A.acct_no = B.acct_no; 2).select stuff from A where A.acct_no in /* or = ANY */ (select acct_no from B); 3).select stuff from A where exists (select 1 from B where B.acct_no = A.acct_no); 依据我目前的经验,不能在实际操作之前确切说出哪个是最好的(但一般情况下3最好),只能通过查看执行计划来确定 对于NOT IN,有两种选项. 
1).select stuff from A where A.acct_no not in /* or <> ALL */ (select acct_no from B); 2).select stuff from A where not exists (select 1 from B where B.acct_no = A.acct_no); 一般来说,我们也可以利用Left Outer Join来完成NOT IN的操作。并且有时这么做可以获得更好的效率。 Oracle: 基本通用的结论: 1).IN适合于外表大而内表小的情况;EXISTS适合于外表小而内表大的情况。 2).用not exists比not in执行的效率要高的多 测试如下: 1.select * from quyu_t a where not exists(select 'x' from quyu_t_his b where a.username=b.username); 2.select * from quyu_t where username not in (select username from quyu_t_his) 执行过程比较: 1.EXISTS的执行流程 select * from t1 where exists ( select null from t2 where y = x ) 可以理解为: for x in ( select * from t1 ) loop if ( exists ( select null from t2 where y = x.x ) then OUTPUT THE RECORD end if end loop 对于in 和 exists的性能区别: 如果子查询得出的结果集记录较少,主查询中的表较大且又有索引时应该用in; 反之如果外层的主查询记录较少,子查询中的表大,又有索引时使用exists。 其实我们区分in和exists主要是造成了驱动顺序的改变(这是性能变化的关键),如果是exists,那么以外层表为驱动表,先被访问,如果是IN,那么先执行子查询,所以我们会以驱动表的快速返回为目标,那么就会考虑到索引及结果集的关系了 另外IN时不对NULL进行处理 如:select 1 from dual where null in (0,1,2,null) 结果为空 2.NOT IN 与NOT EXISTS: NOT EXISTS的执行流程 select *** from rollup R where not exists ( select 'Found' from title T where R.source_id = T.Title_ID); 可以理解为: for x in ( select * from rollup ) loop if ( not exists ( that query ) ) then OUTPUT end if; end; 注意:NOT EXISTS 与 NOT IN 不能完全互相替换,看具体的需求。如果选择的列可以为空,则不能被替换。 例如下面语句,看他们的区别: select x,y from t; x y ------ ------ 1 3 3 1 1 2 1 1 3 1 5 select * from t where x not in (select y from t t2 ) no rows select * from t where not exists (select null from t t2 where t2.y=t.x ) x y ------ ------ 5 NULL 所以要具体需求来决定 对于not in 和 not exists的性能区别: not in :1).只有当子查询中,select 关键字后的字段有not null约束或者有这种暗示时用not in; 2).另外如果主查询中表大,子查询中的表小但是记录多,则应当使用not in,并使用anti hash join. 如果主查询表中记录少,子查询表中记录多,并有索引,可以使用not exists,另外not in最好也可以用/*+ HASH_AJ */或者外连接+is null NOT IN 在基于成本的应用中较好 比如: select * from rollup R where not exists ( select 'Found' from title T where R.source_id = T.Title_ID); 改成(佳) select * from title T, rollup R where R.source_id = T.Title_id(+) and T.Title_id is null; 或者(佳) select /*+ HASH_AJ */ * from rollup R where ource_id NOT IN ( select ource_id from title T where ource_id IS NOT NULL ) teradata CAST 1)、数值型截位或四舍五入 SEL CAST(50500.75 AS INTEGER) AS "Truncated” ,CAST(50500.75 AS DECIMAL(6,0)) AS "Rounded"; Truncated Rounded ----------- ---------- 50500 50501. SEL CAST(6.74 AS DECIMAL(2,1)) AS "Drop Precision” ,CAST(6.75 AS DECIMAL(2,1)) AS "Round Up” ,CAST(6.85 AS DECIMAL(2,1)) AS "Round Down to Even"; Drop Precision Round Up Round Down to Even ----------------- ------------ ------------------------ 6.7 6.8 6.8 2)、字符型截位 SEL 'Teradata'(char(4)) as "Truncate with CAST” ,'Teradata'(char(4),UPPERCASE) as "Truncate and UPPERCASE"; Truncate with CAST Truncate and UPPERCASE ---------------------- ------------------------------ Tera TERA 3)、数值型进行格式变换 Numeric formatting symbols: $ Fixed or floating dollar sign 9 Decimal digit (no zero suppress) Z Zero-suppressed decimal digit , Comma – inserted where specified . Decimal point position - Dash character – inserted where specified / Slash character – inserted where specified % Percent character – inserted where specified SELECT 123 (FORMAT '99999999'), 123 (FORMAT '9(8)') ,000005 (FORMAT 'Z(5)9'), 1234.56 (FORMAT '$$$,$$9.99') ,5 (FORMAT 'Z9%'), 2225551212 (FORMAT '999/999-9999'); 123 123 5 1234.56 5 2225551212. -------- -------- ------ ---------- --- ------------ 00000123 00000123 5 $1,234.56 5% 222/555-1212 将DECIMAL型转成字符型: 不要用这种办法: TRIM(TRAILING '.' 
FROM TRIM(CAST(P1.Acct_Item_Type_Id AS CHAR(20))) 要采取这种办法: CAST(cast(P1.Acct_Item_Type_Id AS format ‘z(18)') as CHAR(20)) 日期型进行格式变换 Date formatting symbols: Y Year as YYYY/Y4 (2004) or YY (04) M Month as MMMM/M4 (August), MMM/M3 (Aug) or MM (08) D Day as DDD/D3 (day of the year) or DD (day of the month) E Day of the week as EEEE/E4 (Monday) or EEE/E3 (Mon) , Comma – inserted where specified . Decimal point position - Dash character – inserted where specified / Slash character – inserted where specified B Blank position – inserted where specified SELECT CAST(current_date AS FORMAT 'YYYYMMDD') ,CAST(current_date AS FORMAT 'MMMBDD,BYYYY') ,CAST(current_date AS FORMAT 'M4BDD,BY4') ,CAST(current_date AS FORMAT 'YYDDD'); Date Date Date Date ----------- ---------------- ------------------- ----- 20040814 Aug 14, 2004 August 14, 2004 04227 miguelmin 发表于 Teradata权限简写说明 在Teradata数据库中存储的权限相关数据均为简写,为了方便记忆,这里将其各自的意义整理如下: 'AE' --> 'ALTER EXTERNAL PROCEDURE' 'AF' --> 'ALTER FUNCTION' 'AP' --> 'ALTER PROCEDURE' 'AS' --> 'ABORT SESSION' 'CA' --> 'CREATE AUTHORIZATION' 'CD' --> 'CREATE DATABASE' 'CE' --> 'CREATE EXTERNAL PROCEDURE' 'CF' --> 'CREATE FUNCTION' 'CG' --> 'CREATE TRIGGER' 'CM' --> 'CREATE MACRO' 'CO' --> 'CREATE PROFILE' 'CP' --> 'CHECKPOINT' 'CR' --> 'CREATE ROLE' 'CT' --> 'CREATE TABLE' 'CU' --> 'CREATE USER' 'CV' --> 'CREATE VIEW' 'D' --> 'DELETE' 'DA' --> 'DROP AUTHORIZATION' 'DD' --> 'DROP DATABASE' 'DF' --> 'DROP FUNCTION' 'DG' --> 'DROP TRIGGER' 'DM' --> 'DROP MACRO' 'DO' --> 'DROP PROFILE' 'DP' --> 'DUMP' 'DR' --> 'DROP ROLE' 'DT' --> 'DROP TABLE' 'DU' --> 'DROP USER' 'DV' --> 'DROP VIEW' 'E' --> 'EXECUTE (MACRO)' 'EF' --> 'EXECUTE FUNCTION' 'I' --> 'INSERT' 'IX' --> 'INDEX' 'MR' --> 'MONITOR RESOURCE' 'MS' --> 'MONITOR SESSION' 'PC' --> 'CREATE PROCEDURE' 'PD' --> 'DROP PROCEDURE' 'PE' --> 'EXECUTE PROCEDURE' 'R' --> 'RETRIEVE/SELECT' 'RF' --> 'REFERENCE' 'RS' --> 'RESTORE' 'RO' --> 'REPLICATION CONTROL or REPLICATION OVERIDE' 'SR' --> 'SET RESOURCE RATE' 'SS' --> 'SET SESSION RATE' 'U' --> 'UPDATE' 'UM' --> 'UDT METHOD' 'UT' --> 'UDT TYPE' 'UU' --> 'UDT USAGE' Teradata对象简写对照 'A'-->'AGGREGATE UDF' 'B'-->'COMBINED AGGREGATE AND ORDERED ANALYTICAL FUNCTION' 'E'-->'EXTERNAL STORED PROCEDURE' 'F'-->'SCALAR UDF' 'G'-->'TRIGGER' 'H'-->'INSTANCE OR CONSTRUCTOR METHOD' 'I'-->'JOIN INDEX' 'J'-->'JOURNAL' 'M'-->'MACRO' 'N'-->'HASH INDEX' 'P'-->'STORED PROCEDURE' 'Q'-->'QUEUE TABLE' 'R'-->'TABLE FUNCTION' 'S'-->'ORDERED ANALYTICAL FUNCTION' 'T'-->'TABLE' 'U'-->'USER-DEFINED DATA TYPE' 'V'-->'VIEW' 'X'-->'AUTHORIZATION' Teradata Hashing 1:数据分布(Data Distribution) Certain PEs support multiplexor channels and others provide LAN support. 数据一般总是以8-bit ASCII码形式存储在AMP中。如果以EDCDIC形式输入,PE会在哈希化和分布发生前将其转换为ASCII Teradata没有预分配表空间的概念。所有哈希分布的表的行通过哈希值分布到所有的AMP中存储。 (Data)FROM HOST--->(Parsing Engine)CONVERTED AND HASHED--> (BYNET)DISTRIBUTED-->(AMP)FORMATTED-->(Vdisk)STORED 2:哈希化 哈希化是TeradataRDBMS利用的通过主索引值来分布数据行的机制. It takes up to 16 columns of mixed data as input and generates a single 32-bit binary value called a Row Hash. Row Hash是行的逻辑存储位置。这个值的一部分用来定位AMP。 Teradata使用Row Hash值来分布,放置和获取行。 哈希算法是随机的但一致的。虽然连续的PI值不会产生连续的哈希值,同样的主索引值却总是得到相同的Row Hash值。相同Row Hash值的行总是分布在同一个AMP中。 注意:一个计算出的值并不能说是真正的随机的。不同的PI值很少产生相同的Row Hash值。当这情况发生的话,那就是哈希同义或者哈希冲突 Input to the algorithm is the Primary Index (PI) value of a row. The Hashing Algorithm creates a fixed length value from any length input string. The output from the algorithm is the Row Hash: – A 32-bit binary value. – The logical storage location of the row. 
– Used to identify the AMP of the row. – Table ID + Row Hash is used to locate the Cylinder and Data Block. – Used for distribution, placement, and retrieval of the row. Row Hash uniqueness depends directly on PI uniqueness. 主索引值的唯一性决定了Row Hash值的唯一性,而Row Hash值的唯一性决定了数据分布性。 Good data distribution depends directly on Row Hash uniqueness. The algorithm produces (approximately) random,but consistent, Row Hashes. The same PI value and data type combination always hash identically. Rows with the same Row Hash will always go to the same AMP. Different PI values rarely produce the same Row Hash (Collisions). ---------------------------------------------------------------- 3:多列哈希化 在处理多列组成的索引哈希算法使用加和乘来作为可交换操作 算法将不同的数字数据类型哈希化出同样数字类型值Row Hash PI = (A, B) =Hash(A)*Hash(B)+Hash(A)+Hash(B)=Hash (B) * Hash (A) + Hash (B) + Hash (A) 例子: A B Hash (A) * Hash (B) + Hash (A) + Hash (B) 1 100 1 100 1 100 =201 2 99 2 99 2 99 =299 ----------------------------------------------------------------- 4:主索引哈希映射 主索引值通过哈希算法产生Row Hash.行通过连接层。通过哈希图确定哪个AMP接收该行。哈希图入口由BYNET维护。Hash Map entries are maintained by the BYNET. ------------------------------------------------------------- 5:哈希图 哈希图是确定哪个AMP获取行的机制。它们在系统的每个节点上被复制。有四种哈希图 [1]Current Configuration Primary(指明行存储在哪里) [2]Current Configuration Fallback(指明行的拷贝存储在哪里) [3]Reconfiguration Primary(指明在系统重新配置后行移动到哪里) [4]Reconfiguration Fallback(指明行的拷贝在系统重新配置后移动到哪里) 每个节点上都有四个哈希图 每个哈希图是65536个入口的排列 The Communications Layer Interfaces通过设计好的哈希图检查着所有的进入信息 Only the AMP whose number appears in the referenced Hash Map entry is interrupted. ---------------------------------------------------------------- 6:主哈希图 The Fallback Hash Map IS NOT an exact copy of the Primary Hash Map. Row Hash的前16 bits是DSW(目的地选择词) DSW在哈希图中指向一个入口 The referenced Hash Map entry identifies the AMP for the row hash. “Destination Selection Word” is also referred to as “Bucket Number.” The Primary Hash Map identifies which AMP the first (Primary) copy of a row belongs to. The Communications Layer只使用Row Hash中的DSW来确定哪个AMP接收行。而AMP使用全部32 bit的Row Hash来确定行的逻辑磁盘存储位置 系统具有相同数量的AMP的主哈希图是一样的 Fallback Hash Maps differ due to clustering differences at each site. ---------------------------------------------------------------- 7:数据分布--------Teradata RDBMS for UNIX 潜在问题: Skewing of Hash Bucket Distribution:Hash Bucket 分布的滞后 Row Hash Synonyms (with NUPIs):哈希同义 Row Hash Collisions:哈希冲突 Hashing Numeric Values:哈希化数量值 Redistribution with Joins:连接的重新分布 ---------------------------------------------------------------- 8:哈希关系表达式Hash-Related Expressions TeradataRDBMS包含了对SQL的扩充->Teradata SQL. HASHROW (column(s))--------->Returns the row hash value of a given sequence 返回给定序列的哈希值 eg: SELECT COUNT(*)/ COUNT (DISTINCT(HASHROW (C1,C2))) (FLOAT) FROM T; HASHBUCKET (hashrow)---->The grouping for the specific hash value eg: SELECT HASHBUCKET (HASHROW(C1,C2)), COUNT(*) FROM T GROUP BY 1 ORDER BY 1; HASHAMP (hashbucket)---->The AMP that owns the hash bucket eg: SELECT HASHAMP (HASHBUCKET (HASHROW (C1,C2))), COUNT(*) FROM T3 GROUP BY 1 ORDER BY 1; HASHBAKAMP (hashbucket)-->The fallback AMP that owns the hash bucket ---------------------------------------------------------------------- 9:哈希函数 [1]确定the Hash Buckets 如果你怀疑由于哈希冲突而导致的数据滞后,你能使用HASHBUCKET函数来确定每个hash bucket中的行数量. The hashbucket function requires the HashRow of the columns that make up the primary index or are being considered for the primary index. 
[2]确定主AMP 一旦你确定由于哈希冲突而导致数据滞后存在,使用hashbucket来确定拥有过量hash bucket的AMP eg:计算选定主索引的NUPI重复和同义冲突的影响: SELECT hashrow (empno, deptno) as “Hash Value”, count (*) FROM employee GROUP BY 1 ORDER BY 2 DESC; Hash Value Count(*) 63524 27 89087 27 (Portion of output cut due to length…) 863 1 28465 1 eg:预测主索引选择后的AMP分布情况 SELECT hashamp (hashbucket (hashrow (empno, deptno))) as “AMP”, Count (*) FROM employee GROUP BY 1 ORDER BY 2 DESC; AMP Count(*) 33 3467 28 3337 (Portion of output cut due to length…) 3 3113 29 3101 ---------------------------------------------------- 10:不平均数据分布的影响 数据行的分布基于主索引的哈希值 在一个袭用中Hash buckets平均分布到节点中的AMP中 歪斜的AMP中的数据分布会影响节点CPU的并行处理效率 Node CPU utilization will reflect these differences because a node is only as fast as its slowest AMP. --------------------------------------------------- 11:数据分布:另外的问题 数据是基于Row Hash code来分布的。Row hash code的hash bucket部分定位了哪个AMP来存储行。 对于连接组合的列,Row hash value是连接列计算出来的. 列的连接只能发生在连接列的Row hash value在同一个AMP上 For joins on columns, row hash value is computed for the join columns. Joins on columns can take place only when the row hash value of the columns are on the same AMP. When join columns are different from the primary index,the rows must be redistributed. This is a necessary overhead. ------------------------------------------------------------- 12:歪斜的连接和聚类处理 如果你的数据没有歪斜但是系统并行处理效率还是很差的话,你应该在连接或者聚类处理过程中寻找歪斜的标识。 poor parallel efficiency occurs when the join field is highly skewed. Skewed Join and Aggregation Processing Note: Skewed processing can also occur with Referential Integrity when the referencing column has skewed demographics, e.g., referenced column is city code. Join:连接 SELECT ... FROM T1, T2 WHERE T1.city = T2.city; Aggregation:聚类 SELECT . . . FROM T1 WHERE . . . GROUP BY city; ------------------------------------------------------ 13:重新配置Reconfiguration Reconfiguration:是对哈希图的重新设置控制并改变系统中AMP数量的过程 系统重新设置哈希图通过重新标识哈希入口来映射新的AMP配置。 This is done in a way that minimizes the number of rows (and Hash Map Entries) reassigned to a new AMP. 在行以后完成以后,主哈希图的重新配置成为了主哈希图的当前配置。并且备份哈希图的重新配置变成了当前备份哈希图的当前配置。 系统创建新的哈希图来容纳新配置 Old and new maps are compared.(新哈希图和老哈希图是可比的) Each AMP reads its rows, and moves only those that hash to a new AMP. It is not necessary to offload and reload data due to a reconfiguration. 不需要数据重组 移动到新AMP中的行的百分比=新的AMP数量/老的+新的AMP的总和 ------------------------------------------------------------ 14:使用主索引 SELECT语句:SELECT * FROM tablename WHERE primaryindex = value(s) | | 分析器输出三部分Table ID,Row Hash和Primary Index value组成的信息. 48 bit Table ID从数据字典中查得。 32 bit Row Hash value由哈希算法产生 Primary Index value来自于SQL请求 | | BYNETs使用(16bit)DSW来定位AMP to interrupt and pass on the message. | | AMP使用Table ID和Row Hash 来确定和定位合适的数据块,然后使用Row Hash和PI值来定位到指定的行.PI值用来区别哈希同义 Only the AMP whose number appears in the referenced Hash Map is interrupted. ------------------------------------------------------------- 15:Row ID 光靠Row Hash并不足以确定出表中的某一行。由于Row Hash是基于主索引值的,所以多行可以有相同的Row Hash.这会导致哈希同义或者NUPI重复。 Row ID=(32 bit)Row Hash+(32 bit)Uniqueness Value For Primary Index retrievals, only the Row Hash and Primary Index values are needed to find the qualifying row(s). 主索引中,使用Row Hash+PI值来确定行 The Uniqueness Value is needed for Secondary Index support. 次索引中,使用Row Hash+Uniqueness Value. Teradata使用Row ID作为次索引的指示器 在插入的时候,系统同时存储了数据值和Row ID ROW ID = ROW HASH and UNIQUENESS VALUE Row Hash: Row Hash is based on Primary Index value. Multiple rows in a table could have the same Row Hash. NUPI duplicates and hash synonyms have the same Row Hash. 
Uniqueness Value: Type system creates a numeric 32-bit Uniqueness Value. The first row for a Row Hash has a Uniqueness Value of 1. Additional rows have ascending Uniqueness Values. Row IDs determine sort sequence within a Data Block.决定排序顺序 Row IDs support Secondary Index performance. The Row ID makes every row within a table uniquely identifiable. Duplicate Rows: Row ID uniqueness does not imply data uniqueness. Row ID的唯一性并不说明了数据的唯一性 ------------------------------------------------------------ 16:定位行 DSW定位了哈希图的入口 哈希图的入口标识了一个配置中的特定AMP [1]一旦目标AMP被确定了,AMP中的Master Index用来标识出Cylinder Index [2]Cylinder Index标识出目标数据块 [3]通过三段信息(Table ID,Row Hash,PI)来定位到数据块中的行 The AMP accesses its Master Index.-->The Master Index is always memory-resident.(Master Index总是驻留内存的) An entry in the Master Index标识了一个索引列和它的当前版本 The Cylinder Index may or may not be memory resident. An entry in the Cylinder Index标识出数据块 数据块是物理输入输出设备可以驻留内存也可以不驻留内存 A search of the Data Block(s) locates the row(s). AMP MEMORY |---------------------| | Master Index | |_____________________| | Cylinder Index Cache| |_____________________| | Data Cache | |_____________________| ---------------------------------------------------------------- 17:表ID Table ID Table ID是三部分信息的第一部分.是分析器提供的48 bit number Table ID由两个主要成分组成: [1]The first component of the Table ID is the Unique Value. [2]The second component of the Table ID is known as the Subtable ID. Table ID=UNIQUE VALUE (32 bit)+ SUB-TABLE ID(16 bit) The Subtable ID is a 16-bit value that tells the file system which type of blocks to search for. 表,视图和宏的Unique Value都来自于数据字典中的DBC.Next. The Table ID, together with the Row ID, gives Teradata a way to uniquely identify every single row in the entire system. Unique Value for Tables, Views, and Macros comes from DBC.Next dictionary table. 有时候会将Unique Value直接代表Table ID使用 Unique Value一般定义表的类型: 普通数据表 永久日志 Spool文件或者volatile table Subtable ID identifies what part of a table the system is looking at: Table Header Primary data rows Fallback data rows First secondary index primary rows First secondary index fallback rows Second secondary index primary rows Second secondary index fallback rows Third secondary index primary rows and so on… Table ID加上了Row ID使得每个行在系统中都是唯一的 -------------------------------------------------------- 18:Master Index Foramt Master Index是一张显示在该AMP中的所有data cylinder的表 Entries in the Master Index are sorted by the first two columns that show the lowest Table ID and Row Hash that can be found on the cylinder. 不包含数据的Cylinder不在Master Index中显示 Master Index中的关键元素: Header Cylinder Index Descriptors(CIDs) Cylinder Index Descriptor Reference Array(CID Ref.Array) Master Index defines all cylinders in use for the AMP. Three elements are Header, CID, and CID Ref. Array. CIDs reside in the Master Index Heap. CIDs define First Table ID/Row ID and Last Table ID/Row Hash. There is a CID Ref. Array pointer for every CID entry. Each master index entry contains the following data sorted on Table ID and row hash:[1]Lowest Table ID in the cylinder [2]Lowest RowID value on the cylinder (associated with the lowest Table ID) [3]Highest Table ID in the cylinder [4]Highest row hash (not RowID) value on the cylinder (associated with the highest Table ID) [5]Cylinder number 系统使用Table ID和RowID(或者row hash value)来检索master index从而获得cylinder number ----------------------------------------------------------------- 19:Master Index Teradata使用Master Index 来定位包含符合的Table ID和Row Hash的Cylinder. 在每个AMP中每个cylinder只有对应的一个入口。 Cylinders with data appear on the Master Index. 
Cylinders without data appear on the free Cylinder List. Each index entry identifies its cylinder’s lowest Table ID and Row ID. 每个索引入口标识出它的cylinder`s lowest Table ID和 Row ID. 索引入口存储在Table ID和row hash 中 很多表都有相同cylinder的行 A table may have rows on many cylinders on different PDISKS on an AMP. The Free Cylinder List is sorted by Cylinder Number. ------------------------------------------------------------------ 20:Cylinder Index设计规划 每个cylinder都有它自己的Cylinder Index(CI).CI描述了在cylinder中的一列数据块和空扇区.Table ID和Lowest Row ID序列化CI It is a tool to identify which block a specific row is in. CI包括四个主要部分: Cylinder Index Header Cylinder Index Heap Subtable Reference Array Descriptors Data Block Descriptor Reference Array 一个Cylinder包含着许多不同表的数据块 一个表(或者子表)可能跨越超过一个cylinder An SRD(Subtable Reference Descriptors) exists on a CI for every sub-table that has Data Blocks on the cylinder. Each SRD existing in the CI’s heap is referenced by an SRD Reference Array entry. The DBD Reference Array Indexes to the Data Blocks for a particular sub table. The DBD Reference Array entries are in a descending order sort based on the sub table ID. --------------------------------------------------------------- 21:Cylinder Index There is an entry for each block on this cylinder. Blocks containing rows appear in the Cylinder Index. Those entries are sorted ascending on Table ID and row hash. Blocks without data appear on the Free Block List. The Free Block list is sorted ascending on Sector Number. Only rows belonging to the same table and subtable appear in a block. Blocks belonging to the same table and sub-table can vary in size. ----------------------------------------------------------------- 22:General Block Layout物理块设计 块是Teradata I/O的设备.块中包含1条或者更多的数据行。都属于一个表。 数据块的最大尺寸是:63.5KB 一个数据块包括:The Data Block Header数据块头部 The Row Heap 行堆 The Row Reference Array 行不能被数据块分割。Row Reference Array 被放置在数据块的最后。 [Data Block Format] Header---Row- ---| -Row---Row-- |-->Row Heap --Row---Row-- ---| -Reference Array--Trailer-- 一个数据块是物理输出输入单位(Max size=63.5kb). The block header包括Table ID(6 bytes) 只有同一个表和子表中的行在一个数据块中 块可以从512字节到63.5KB(1 to 127磁盘扇区) Blocks within a table vary in size. The system adjusts block sizes dynamically Data blocks are not chained together Rows are not split across block bundaries Reference Array Pointers are sorted in ascending order on Row ID within blocks Rows are(16 bit) word aligned within a block ------------------------------------------------------------------- 23:General Row Layout ROW LEGNTH+ROW ID+ADDITIONAL OVERHEAD+COLUMN DATA VALUES+ROW REF.ARRAY 2bytes 4bytes+4bytes 2bytes 可变的 2bytes | | 用来确定行在块中的起始位置 Teradata支持可变长的行 行的最大长为64KB 主索引值确定了Row Hash值 系统产生Uniqueness Value 表中的行是可变长的。行最大长度为64KB 在SET表中不允许出现重复行 ----------------------------------------------------------------- 24:AMP Read I/O 概要 Master Index是常驻内存的 如果Cylinder Index不在内存中,则AMP读取Cylinder Index 如果Data Blocks不在内存中,则AMP读取数据块 AMP memory, cache size, and locality of reference determine if either of these steps require physical I/O. Often, the Cylinder Index is memory resident and a Unique Primary Index retrieval requires only one (1) I/O. 整型日期做PPI,sql不走partition CREATE MULTISET TABLE tmp_tmp1,NO FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT ( Date_Id INTEGER TITLE '日期' NOT NULL, ... ) PRIMARY INDEX XIE1tmp_tmp1 ( ...) PARTITION BY RANGE_N(Date_Id BETWEEN 20070101 AND 20101231 EACH 1 ); CASE 1: select count(*) FROM tmp_tmp1 where T1.Date_Id >= 20090401 Explanation: ........ 
2) Next, we do an all-AMPs SUM step to aggregate from 10831 partitions of tmp_tmp1with a condition of ("tmp_tmp1.Date_Id >= 20090401"). ............... CASE 2: select count(*) FROM tmp_tmp1 where T1.Date_Id >= cast(CAST(ADD_MONTHS(DATE-1,-1) AS DATE FORMAT 'YYYYMM')||'01' as integer) Explanation: ....... 2) Next, we do an all-AMPs SUM step to aggregate from all partitions of tmp_tmp1with a condition of ( "tmp_tmp1.Date_Id >= ((ADD_MONTHS(((DATE )- 1 ),-1 )(VARCHAR(6), CHARACTER SET UNICODE, NOT CASESPECIFIC, FORMAT 'YYYYMM'))||'01'(INTEGER, FORMAT '-(10)9'))"). .......... 实验得出,where条件中有任何的数据处理,将不会走group partitions,据说这是teradata老版本的bug,似乎老外用的已经没有这个问题了,为什么给我们用旧的呢?鄙视老外一下! 不知道当初为什么要把date_id设计成integer而非date。定义成整型,定义范围内的很多partition牙根不会用到,比如尾数大于30的任何数。上述CASE 1中,实际用到的partition也就40个,但执行计划中为10831个,类型定义为罪魁祸首。 teradata limits SYSTEM Number of Data bases 4.2 Billion Message length 1 MB SQL request length 1 MB Active transactions 2,048 Data Parcel Length 65,104 Parcels in one message 256 SQL title length 60 String constant length 255 Data Format Descriptor Length 30 Error message text in failure parcel 255 Sessions per gateway (Max 1 gateway / Node) 1,200 Sessions per PE 120 Concurrent Utility jobs 15 Vprocs per system 16,384 Vprocs per Node 128 Data capacity per AMP - Unformatted 1.3 Tbyte DATABASE Tables per database 32,000 Journal tables per database 1 Columns per table 2,048 LOB columns per table 32 Columns per View / Spool file 512 Block Size 130,560 Row size (approx) 64,256 Column size (approx) 64,000 LOB size 2 GB Column / Table name length 30 Number of fields per index 62 Secondary / Join indexes per table 32 Table level constraints per table 100 Referential constraints per table 64 Tables that can reference a table 64 Columns in Foreign & Parent key 16 View / Macro nesting levels 8 Rows per table limited by space available SESSION Spool Files 2048 Global Temporary Tables 1000 Volatile Temporary Tables 2000 Parallel steps performed ( If no channels) 20 Number of channels 10 (Redistribution across AMPs uses 4 channels, Non prime Index (without redistribution) uses 2 channels) partition timestamp(0) not use partition Tearadata前辈的问题,记下来: The definition statement is as follows: CREATE MULTISET TABLE PD_DATA_Z.NET_SETT_TICKET_DAILY ( Calling_Nbr VARCHAR(21) NOT NULL, Start_Time TIMESTAMP(0) NOT NULL, ... ) PRIMARY INDEX XIE1NET_SETT_TICKET_DAILY ( Calling_Nbr ) PARTITION BY RANGE_N(CAST((Start_Time ) AS DATE FORMAT 'YYYYMMDD') BETWEEN DATE '1990-01-01' AND DATE '2030-12-31' EACH INTERVAL '1' DAY ); Case 1: explain select * FROM PD_DATA_Z.NET_SETT_TICKET_DAILY WHERE Start_Time = '2009-03-20 00:00:00'; *** Help information returned. 19 rows. *** Total elapsed time was 1 second. Explanation --------------------------------------------------------------------------- 1) First, we lock a distinct PD_DATA_Z."pseudo table" for read on a RowHash to prevent global deadlock for PD_DATA_Z.NET_SETT_TICKET_DAILY. 2) Next, we lock PD_DATA_Z.NET_SETT_TICKET_DAILY for read. 3) We do an all-AMPs RETRIEVE step from a single partition of PD_DATA_Z.NET_SETT_TICKET_DAILY with a condition of ( "PD_DATA_Z.NET_SETT_TICKET_DAILY.Start_Time = TIMESTAMP '2009-03-20 00:00:00'") with a residual condition of ( "PD_DATA_Z.NET_SETT_TICKET_DAILY.Start_Time = TIMESTAMP '2009-03-20 00:00:00'") into Spool 1 (all_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 1 is estimated with low confidence to be 1,127,057,976 rows. 
The estimated time for this step is 55 minutes and 24 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 55 minutes and 24 seconds. Case 2: explain select * FROM PD_DATA_Z.NET_SETT_TICKET_DAILY WHERE Start_Time <= '2009-03-20 00:00:00'; *** Help information returned. 17 rows. *** Total elapsed time was 1 second. Explanation --------------------------------------------------------------------------- 1) First, we lock a distinct PD_DATA_Z."pseudo table" for read on a RowHash to prevent global deadlock for PD_DATA_Z.NET_SETT_TICKET_DAILY. 2) Next, we lock PD_DATA_Z.NET_SETT_TICKET_DAILY for read. 3) We do an all-AMPs RETRIEVE step from PD_DATA_Z.NET_SETT_TICKET_DAILY by way of an all-rows scan with a condition of ("PD_DATA_Z.NET_SETT_TICKET_DAILY.Start_Time <= TIMESTAMP '2009-03-20 00:00:00'") into Spool 1 (group_amps), which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 1 is estimated with no confidence to be 375,685,992 rows. The estimated time for this step is 25 minutes and 8 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 25 minutes and 8 seconds. Case 3: explain select * FROM PD_DATA_Z.NET_SETT_TICKET_DAILY WHERE cast(Start_Time as date format 'yyyymmdd') = '20090320'; *** Help information returned. 18 rows. *** Total elapsed time was 1 second. Explanation --------------------------------------------------------------------------- 1) First, we lock a distinct PD_DATA_Z."pseudo table" for read on a RowHash to prevent global deadlock for PD_DATA_Z.NET_SETT_TICKET_DAILY. 2) Next, we lock PD_DATA_Z.NET_SETT_TICKET_DAILY for read. 3) We do an all-AMPs RETRIEVE step from PD_DATA_Z.NET_SETT_TICKET_DAILY by way of an all-rows scan with a condition of ("(CAST((PD_DATA_Z.NET_SETT_TICKET_DAILY.Start_Time) AS DATE))= DATE '2009-03-20'") into Spool 1 (group_amps), which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 1 is estimated with no confidence to be 112,705,798 rows. The estimated time for this step is 12 minutes and 12 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 12 minutes and 12 seconds. Case 4: explain select * FROM PD_DATA_Z.NET_SETT_TICKET_DAILY WHERE cast(Start_Time as date format 'yyyymmdd') <= '20090320'; *** Help information returned. 17 rows. *** Total elapsed time was 1 second. Explanation ------------------------------------------------------------------------- 1) First, we lock a distinct PD_DATA_Z."pseudo table" for read on a RowHash to prevent global deadlock for PD_DATA_Z.NET_SETT_TICKET_DAILY. 2) Next, we lock PD_DATA_Z.NET_SETT_TICKET_DAILY for read. 
3) We do an all-AMPs RETRIEVE step from PD_DATA_Z.NET_SETT_TICKET_DAILY by way of an all-rows scan with a condition of ("(CAST((PD_DATA_Z.NET_SETT_TICKET_DAILY.Start_Time) AS DATE))<= DATE '2009-03-20'") into Spool 1 (group_amps), which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 1 is estimated with no confidence to be 375,685,992 rows. The estimated time for this step is 25 minutes and 8 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 25 minutes and 8 seconds. All these query should use partition elimination, but only Case 1 did. ------------------------------------------------------------------------------------ NTA 1589 describes this issue. This is due to users having potential different timezone differentials into the same data and potentially not selecting data correctly by spanning partitions. PPI 1.Partitioned Primary Indexes (PPI) PPI是什么: • Teardata中的用在物理数据库设计中的索引机制. • 数据行在AMP级聚合成分区. • 分区只是在AMP上将数据行排序. PPI可以提供的优势: • 增加可用性以提高某些特定查询的效率(特定的分区限制查询). • 只有查询涉及到的特定分区需要被访问,以避免全表扫描. PPI如何创建和管理: • PPI的创建和修改很简单(只是在CREATE TABLE语句或ALTER TABLE语句中增加ppi申明就可以了). • 数据在AMP上做分布时,总会按照PPI进行. 2.分区是如何执行的(How is a Partitioning Implemented?) 为数据分布和数据访问提供3个级别的分区. • 行基于Row Hash中的DSW部分做全AMP分布(以及基于PI的访问). • 如果表做了分区,行在AMP级别最初是按照其分区号码(partition number)来排序的. • 在分区内部,数据行按照Row ID来逻辑存储. 单AMP中PPI限制. • 在单个AMP中,行的PPI number可以从1到65535. 在含有PPI的表中,每行可以由其Row Key来唯一确定. • Row Key = Partition # + Row Hash + Uniqueness Value • 含有PPI的表中,分区号将作为数据行的一部分. 3.Primary Index Access (NPPI) TD RDBMS软件处理带有PI的sql请求过程如下: 1).PE(Parsing Engine)创建一个4部分组成的信息:Table ID,Partition #0,the Row Hash,Primary Index value(s). • Table ID,48位,在数据字典中定位. • Row Hash value,32位,由哈希算法生成. • PI value,由提交的SQL得到. • PE通过数据字典得到表是否含有NPPI,若是则将分区数目设为0. 2) MPL(Message Passing Layer)利用Hash值的一部分决定应该将请求发送给哪个AMP: • 利用DSW(Hash的前16位)来定位哈希图(Hash Map)中的bucket. • 此bucket标识PE要发送请求给哪个AMP. 3)AMP利用Table ID及Row Hash去识别和定位数据块,然后利用Row Hash及PI值去定位特定行. • PI值需要带上是为了防止Hash同义; • AMP默认此行处于分区0中. 4.Primary Index Access (PPI) • PE及MPL的处理同上,AMP利用Table ID,PPI及Row Hash去识别和定位数据块,然后利用Row Hash及PI值去定位特定行. • 每个数据行由以下部分组成:Partition No. + Row ID(Row Hash + PI) + Row Data 5.Why Define a PPI? • 可以避免全表扫描,在不使用SI的情况下提高查询效率. • 使用NPPI的表必须进行全表扫描,此时使用NUSI可以提高效率,但是使用NUSI会占用额外的空间和增加维护成本. • 删除单个分区中的大量行数据会非常快. – ALTER TABLE … DROP RANGE … ; – 说明: 快速删除数据只会在表不包含非分区数据,并且没有SI,Join Index或Hash Index时有效. 6.Advantages/Disadvantages of PPI Advantages • 利用分区信息查询会大大增加查询速度. • 区域查询可在没有SI的表没有SI的表上进行. • delete整个分区会非常快. Potential Disadvantages • PPI占用大于2字节的空间,表会占用更多的PERM空间. -- 它也会使得SI字表中受影响行index中的rowid增加2字节大小. • 如果分区列不是PI的一部分的话,利用PI访问的效率会降低. -- 如一个查询中指定了PI值,但是没有指定ppi值,查询将会逐个分区查找所给PI值. • 利用相同的PI去关联无分区的表效率将会降低. -- 如果表是分区的,数据行一样不需要排序,受分区影响,此任务变成几个子关联,每个分区将需要关联一次. • 如果分区字段不是PI的一部分,PI不能被定义唯一性. 7.PPI Considerations • 只有基表(base tables)可以是PPI表. -- Join Indexes,Hash Indexes,Global Temporary Tables, Volatile Tables,Secondary Indexes均不行. -- 注: 从V2R6.1起, 全局和可变临时表均可以分区. -- PPI表可以有SI. -- 可作为定义Join Index和Hash Index的参考. • 一个表可以有65535个分区. -- 分区字段不一定要包含在PI里面,如果没有包含在PI里,则PI不能定义唯一性. -- 分区字段有多种选择. 8.How to Define a PPI 定义partition by有以下限制: • 分区字段必须是标量表达式(INTEGER)或者可以转化成为INTEGER. • 标量表达式可以由多行构成,被称为分区列. • 表达式不能包含aggregate/ordered-analytic/statistical functions,DATE,TIME,ACCOUNT,RANDOM,HASH等函数. 
• Join Indexes,Hash Indexes and Secondary Indexes上不能定义PPI. • 只有在所有的分区字段均包含在PI中时,PI才可以定义唯一性. • 分区表达式长度限定在8100个字符(它被定义在DBC.TableConstraints中作为默认约束检查) 9.Partitioning with CASE_N and RANGE_N 用CASE_N导致以下结果: • 判断一系列的条件,返回第一个为真的值. • 结果是依照分区条件落到不同分区的数据. • Note: Patterned after SQL CASE expression. 用RANGE_N导致以下结果: • 表达式被判定后映射到一系列分区值中的一个上. • 分区值依递增的顺序罗列,各值不能重复. • 结果是依照分区区间落到不同分区的值. 10.Partitioning with RANGE_N (Example) 定义表如下: CREATE TABLE Claim ( c_claimid INTEGER NOT NULL ,c_custid INTEGER NOT NULL … ,c_claimdate DATE NOT NULL) PRIMARY INDEX (c_claimid) PARTITION BY RANGE_N (c_claimdate BETWEEN DATE '2001-01-01' AND DATE '2007-12-31' EACH INTERVAL '1' MONTH ); 有以下两条记录插入: INSERT INTO Claim VALUES (100039,1009, …, '2001-01-13'); #数据将置于partition #1中 INSERT INTO Claim VALUES (260221,1020, …, '2006-01-07'); #数据将置于partition #61中(12*5+1) 下面的插入语句将导致错误: INERT INTO Claim VALUES (100039, 1009, '1999-12-24', …); INSERT INTO Claim VALUES (100039, 1009, '2008-01-01', …); INSERT INTO Claim VALUES (100039, 1009, NULL, …); 报错信息:5728: Partitioning violation for table TFACT.Claim. Note:c_claimid不能设定唯一性(也就是NUPI),因为c_claimdate不是PI的一部分. 此时要保证PI的唯一性,可以在PI上建USI. -- CREATE UNIQUE INDEX (c_claimid) ON Claim_PPI; 也可以在NUPI列上创建NUSI,NUSI需要考虑的: • Eliminate partition probing • Row-hash locks • 1-AMP operation • Can be used with unique or non-unique PI columns • Must be equality condition • Works with V2R6.0 or later • NUSI Single-AMP operation only supported on PPI tables • Use MultiLoad to load table 比较以下两个分区的定义: ...PARTITION BY RANGE_N (sales_date BETWEEN DATE '2001-01-01' AND DATE '2001-12-31' EACH INTERVAL '7' DAY, DATE '2002-01-01' AND DATE '2002-12-31' EACH INTERVAL '7' DAY, DATE '2003-01-01' AND DATE '2003-12-31' EACH INTERVAL '7' DAY, DATE '2004-01-01' AND DATE '2004-12-31' EACH INTERVAL '7' DAY, DATE '2005-01-01' AND DATE '2005-12-31' EACH INTERVAL '7' DAY DATE '2006-01-01' AND DATE '2006-12-31' EACH INTERVAL '7' DAY, DATE '2007-01-01' AND DATE '2007-12-31' EACH INTERVAL '7' DAY ); 下列写法更简单,但是效率较低: ...PARTITION BY RANGE_N (sales_date BETWEEN DATE '2001-01-01' AND DATE '2007-12-31' EACH INTERVAL '7' DAY); 说明:1)第一种分区方法可能存在分区大小不一致的情况(如最后一个星期不够7天),这样的话数据分布将会倾斜, 第二种情况存在一星期跨越两个年头的情况,如此以来,要删除某年的数据就会比较慢. 2)基于年以月做单位分区就不存在上述问题. 分区可以定义不同的尺度: ...PARTITION BY RANGE_N ( sales_date BETWEEN DATE '2001-01-01' AND DATE '2005-12-31' EACH INTERVAL '7' DAY, DATE '2006-01-01' AND DATE '2006-12-31' EACH INTERVAL '1' DAY); 说明:这类定义不常用,但是在某些情况下可能很有用(如武汉本地网按周分区,其他本地网按月分区) 为了达到更好的分区效果,有时也会对分区字段做标量计算: ... PRIMARY INDEX (store_id, item_id, sales_date) PARTITION BY RANGE_N ( (store_id - 1000) BETWEEN 1 AND 10 EACH 1);; 11.Special Partitions with CASE_N and RANGE_N 下列保留字可以定义特殊分区: NO CASE (or NO RANGE) [OR UNKNOWN] and UNKNOWN ex: PARTITION BY CASE_N (col3 IS NULL, col3 < 10, col3 < 100, NO CASE OR UNKNOWN) CASE_N和RANGE_N在以下情况可以将数据放入特定分区: • 数据没有落在任何CASE或RANGE表达式内. • 计算表达式是UNKNOWN的情况. 如下几种定义的异同: PARTITION BY CASE_N (col3 IS NULL, col3 < 10, col3 < 100, NO CASE OR UNKNOWN); PARTITION BY CASE_N (col3 IS NULL, col3 < 10, col3 < 100, NO CASE, UNKNOWN); PARTITION BY CASE_N (col3 IS NULL, col3 < 10, col3 < 100, NO CASE); PARTITION BY CASE_N (col3 IS NULL, col3 < 10, col3 < 100, UNKNOWN); 第一种,匹配不上(如col3=500)的和无法匹配的(如col3=NULL)放在一个分区,第二种分开放, 第三种碰见unknown数据时报错,第四种遇见匹配不上的会报错. 12.SQL Use of PARTITION Key Word 查看各个分区的记录数: SELECT PARTITION AS "Part #", COUNT(*) AS "Row Count" FROM pd_data.prd_prd_inst_hist GROUP BY 1 ORDER BY 1; 说明:带分区查询时不能使用试图,上面sql改成pv_data_z.prd_prd_inst_hist将报错Invalid Partition field. 
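结合上面第10节的RANGE_N建表例子和第11节的CASE_N特殊分区写法,下面补充一个用CASE_N做分区的完整建表示例(表名、字段均为示意,仅作语法参考,并非某个真实系统的定义):
CREATE TABLE Claim_Amt_PPI
( c_claimid   INTEGER NOT NULL
 ,c_custid    INTEGER NOT NULL
 ,c_claimamt  DECIMAL(12,2)
 ,c_claimdate DATE NOT NULL)
PRIMARY INDEX (c_claimid)
PARTITION BY CASE_N
( c_claimamt <   1000
 ,c_claimamt <  10000
 ,c_claimamt < 100000
 ,NO CASE OR UNKNOWN);
说明:每行落入第一个为真的条件所对应的分区;c_claimamt为NULL或不小于100000的行进入NO CASE OR UNKNOWN分区;由于分区字段c_claimamt不在PI中,该PI不能定义为唯一.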
12.SQL Use of CASE_N 因为Sys_Calendar.Calendar无UNKNOWN数据,下例会有3个分区: SELECT CASE_N ( day_of_calendar<38350, ,day_of_calendar<38357 ,NO CASE ,UNKNOWN ) AS "Part #", MIN (Calendar_Date) AS "Minimum Date", MAX (Calendar_Date) AS "Maximum Date" FROM Sys_Calendar.Calendar WHERE Calendar_Date BETWEEN DATE '2004-11-28' AND DATE '2005-01-09' GROUP BY "Part #" ORDER BY "Part #"; 13.SQL Use of RANGE_N 下例有两个分区: SELECT RANGE_N ( Calendar_Date BETWEEN DATE '2004-11-28' AND DATE '2004-12-31' EACH INTERVAL '7' DAY, DATE '2005-01-01' AND DATE '2005-01-09' EACH INTERVAL '7' DAY ) AS "Part #", MIN (Calendar_Date) AS "Minimum Date", MAX (Calendar_Date) AS "Maximum Date" FROM Sys_Calendar.Calendar WHERE Calendar_Date BETWEEN DATE '2004-11-28' AND DATE '2005-01-09' GROUP BY "Part #" ORDER BY "Part #"; 14.Using ALTER TABLE with PPI Tables ALTER TABLE有以下限制: • 非空表的PI不能修改. • 非空分区表只限于修改结束点(altering the “ends”). • 如果表带有delete,insert触发器,触发器必须被置为disable. 使用ALTER TABLE的关键是需要直到表里面有没有数据存在: • 如果表是空的,PI和PPI均可以修改. • 如果表含有数据,可以修改分区结束点(DROP RANGE/ADD RANGE,先删除原有结束点,增加新的结束点--如时间的后移) 向去掉2001年的分区,增加一年的分区,并保留去掉分区的数据: ALTER TABLE Sales_History MODIFY PRIMARY INDEX: DROP RANGE BETWEEN DATE '2001-01-01' AND DATE '2001-12-31' EACH INTERVAL '1' MONTH ADD RANGE BETWEEN DATE '2008-01-01' AND DATE '2008-12-31' EACH INTERVAL '1' MONTH WITH INSERT INTO Old_SalesHistory; 说明: • 分区的修改利用增删来实现,增是在分区末尾延伸. • DROP不一定就会DELETE: -- 如果表中包含NO RANGE分区,数据行从删除的分区中转移到NO RANGE分区,这可能很耗时. • 在ALTER TABLE前,备份数据的表(Old_SalesHistory)必须存在. • 必须有落在ADD RANGE内的数据. 试验如下: 第一步: CREATE MULTISET TABLE PD_work.#BIL_ACCT_ITEM_GET_Q ,NO FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT ( Acct_Item_Id DECIMAL(12,0) TITLE '账目标识' NOT NULL, Prd_Inst_Id DECIMAL(12,0) TITLE '产品实例标识' NOT NULL, Pay_Cycle_Id INTEGER TITLE '营收周期标识' NOT NULL, …… Latn_Id INTEGER TITLE '本地网标识' NOT NULL COMPRESS (0 ,1017 )) PRIMARY INDEX XIE1BIL_ACCT_ITEM_GET_Qt ( Acct_Item_Id ) PARTITION BY RANGE_N(Pay_Cycle_Id BETWEEN 200801 AND 200807 EACH 1,NO RANGE ); 第二步: 从PD_data.BIL_ACCT_ITEM_GET_Q表中,200801,200806,200807,200808三个月各导100条数据到 PD_work.#BIL_ACCT_ITEM_GET_Q 第三步: SELECT PARTITION AS "Part #", COUNT(*) AS "Row Count" FROM PD_work.#BIL_ACCT_ITEM_GET_Q GROUP BY 1 ORDER BY 1; 结果: Part # Row Count 1 100 6 100 7 100 8 100 第四步: create table PD_work.#Old_BIL_ACCT_ITEM_GET_Q as PD_work.#BIL_ACCT_ITEM_GET_Q with no data; 第五步: ALTER TABLE PD_work.#BIL_ACCT_ITEM_GET_Q MODIFY PRIMARY INDEX DROP RANGE BETWEEN 200801 AND 200806 EACH 1 ADD RANGE BETWEEN 200808 AND 200812 EACH 1 WITH INSERT INTO PD_work.#Old_BIL_ACCT_ITEM_GET_Q 第六步: SHOW TABLE PD_work.#BIL_ACCT_ITEM_GET_Q 可见表定义变成如下: CREATE MULTISET TABLE PD_work.#BIL_ACCT_ITEM_GET_Q ,NO FALLBACK , NO BEFORE JOURNAL, NO AFTER JOURNAL, CHECKSUM = DEFAULT ( Acct_Item_Id DECIMAL(12,0) TITLE '账目标识' NOT NULL, Prd_Inst_Id DECIMAL(12,0) TITLE '产品实例标识' NOT NULL, Pay_Cycle_Id INTEGER TITLE '营收周期标识' NOT NULL, …… Latn_Id INTEGER TITLE '本地网标识' NOT NULL COMPRESS (0 ,1017 )) PRIMARY INDEX XIE1BIL_ACCT_ITEM_GET_Qt ( Acct_Item_Id ) PARTITION BY RANGE_N(Pay_Cycle_Id BETWEEN 200807 AND 200807 EACH 1 , 200808 AND 200812 EACH 1 , NO RANGE); 第七步: SELECT PARTITION AS "Part #", COUNT(*) AS "Row Count" FROM PD_work.#BIL_ACCT_ITEM_GET_Q GROUP BY 1 ORDER BY 1; 结果: Part # Row Count 1(07) 100 2(08) 100 7(01,06) 200 Teradata spool空间 系统在下列情况下会使用spool空间: • 在session没有结束时,每个用户查询的返回值。因此,为了存储查询的返回值,每个用户都必须有足够的spool空间。 • 大表. • Large volatile tables. These tables require more available spool space. 
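在排查spool相关问题之前,可以先看看各数据库/用户spool的上限、当前及峰值使用情况。下面是一个查询数据字典视图DBC.DiskSpace的简单示例(假设执行用户有权访问该视图,视图与字段名以所在版本的数据字典为准,这里只是一个参考写法):
SELECT DatabaseName
 ,SUM(MaxSpool)     AS MaxSpool
 ,SUM(CurrentSpool) AS CurrentSpool
 ,SUM(PeakSpool)    AS PeakSpool
FROM DBC.DiskSpace
GROUP BY 1
ORDER BY 4 DESC;
说明:由于DBC.DiskSpace按AMP(Vproc)记录,这里按DatabaseName做SUM汇总;PeakSpool接近MaxSpool的用户,往往就是容易报spool空间不足的用户,可结合其sql的执行计划进一步分析。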
对系统需要的spool空间的粗略估计: 对于没有fallback,无compression,且工作量适中的系统,spool空间应占系统MAXPERM的30%,或者占系统CURRENTPERM的40%。 下列情况下需要更多的spool空间: • 高压缩比的数据需要更多的spool空间。 -- 压缩可以降低current perm data storage但不是总会降低spool空间,因为spool表并不总是被压缩。 • 批加载一次可以加载数百万条或更多的记录,因此在加载过程中需要spool空间来存放临时数据。 • 高并发需要更大的spool空间。 下列情况需要较少的spool空间: • Fallback会使用双倍的currentperm来存储数据,但不需要双倍的spool空间。因为只有主拷贝会使用spool空间。 • 适时加载,如运用TPump,一次只加载少量的数据,不需要很多的spool空间。 • 好的调优策略可以运用更少的CPU及spool空间。 Spool空间分类: • Volatile Spool 下列情况下Volatile Spool被释放: 1、Transaction completes (unless the table was created with ON COMMIT PRESERVE ROW) 2、Table is dropped manually during the session 3、Session ends 4、Teradata Database resets • Intermediate Spool 1、当spool结果不再需要时,Intermediate Spool空间被释放。 2、可以通过查看执行计划来判断Intermediate Spool空间被释放的时机。 3、intermediate spool做flush操作后的第一步处理描述为“Last Use” • Output Spool Output results are either: 1、Final rows returned in the answer set for a query 2、Rows updated within, inserted into, or deleted from a base table Permanent, temporary, 及spool数据块不能在相同的cylinder中共存 2009 年 08 月 24 日, 星期一 Teradata的数据保护 同其他的数据库系统相比,Teradata数据库提供了更多的数据保护措施。概括起来有以下几种:锁保护、fallback机制、Recovery Journals、Cliques、Transient Journal、ARC工具、Permanent Journal,当然还有最底层的RAID技术,不过RAID技术与Teradata本身无关,这里不做说明。 一、Locks 锁可以防止多用户在同一时刻试图修改同一数据而影响数据的完整性。这种并发控制是通过锁定目标数据来实现的。锁是在请求操作的同时自动加载的,在请求完成后自动释放。另外,用户是可以改变锁的类型。有四种类型锁: 有四种类型的锁 Exclusive --- 防止其他任何类型的并行访问 --- 只应用于DATABASE和TABLE,而不应用于行 --- 限制最高 Write --- 防止其他的Write、Read、Exclusive锁 Read --- 防止其他的Write、Exclusive锁 Access --- 只防止其他的Exclusive 锁可以应用有三个数据库级别 Database --- 应用到数据库中所有的Table/View Table/View --- 应用到Table/View中所有的行 Row hash – 应用到所有具有相同哈希值的行 锁的类型根据SQL命令自动施加 Select –- 施加Read锁 Update --- 施加Write锁 Create Table --- 施加Exclusive锁 DW中最常用的为Access锁 ACCESS锁优点: 在多用户环境中,容许用户快速访问表 对其他的访问产生最小的影响 对访问大量行数据非常有用 ACCESS锁缺点: 在维护表的同时执行ACCESS锁,可能产生不一致或不正确的数据。 二、fall back protection cluster:作为一个独立的fallback活动单元一组AMPs。 Fallback保护是在同一Cluster中其他的AMP上保存一份相同的记录来达到保护数据的目的。如果一个AMP失效,系统就会访问Fallback的记录。Fallback机制在表的级别上提供了AMP的容错能力。在Fallback下,如果一个AMP失效,AMP上的数据仍然可用。用户可以继续使用Fallback的表,而不会丢失任何数据。 建表的时候或建表以后,都可以进行表的Fallback保护定义。如果定义了Fallback保护,它的保护是自动和透明的。 Fallback容许在不同的AMP上有两条相同的记录。如果其中的一个AMP失效,另一个AMP上的记录仍然可用。 这样的好处就是保护了用户的数据,但是也带来了额外的开销。包括两倍的磁盘空间、两倍的Insert、Update、Delete的I/O开销(Fallback选项对于Select操作不需要额外的I/O开销)。 在一个AMP不可用时,一张FALLBACK保护的表仍然可用 一个FALLBACK记录是主记录存在其他AMP上的副本 FALLBACK优点: 在AMP下线期间仍然可以正常访问数据 在磁盘阵列RAID保护的基础上增加了一层数据保护 自动存储 高可用性 FALLBACK开销 两倍的库表存储开销 两倍INSERT、UPDATE、DELETE的I/O开销 注意:如果在同一个CLUSTER中失效两个AMP则RDBMS会停掉 FALLBACK集群是AMP集群,它担当独立的FALLBACK单元。集群对表中行的分布没有影响。FALLBACK行只复制到同一集群中的其他AMP上。 集群的大小可以通过TERADATA控制工具设置,从2个AMP到16个AMP不等(在同一系统中集群的大小是可以不相等的)。例如将8个AMP系统被分成2个集群,每个集群有4个AMP。 如果一个AMP出错,存储在这个AMP上的主行和FALLBACK行是不可以访问的。但是这些行的副本在同一集群中的其他AMP上是可用的。 在一集群中失效一个AMP对其他的集群是没有影响的 如果在集群中失效一个AMP,仍然可以完全访问所有FALLBACK保护的表。 如果在同一集群中失效两个AMP,则整个数据库系统将停掉 当一个AMP失效后,同一集群中的其他AMP出了完成自己以前的工作,还要完成Down掉AMP的工作。集群越大,如果一个AMP Down掉后对其他AMP负载的增加影响就越小。但是集群越大,坏两个AMP的几率就更大,而使整个数据系统停掉的可能性就越大。 FALLBACK保护集群定义为独立的AMP容错单元 集群中的FALLBACK行必须在同一个集群中 集群中一个AMP失效仍然可以访问表中的数据 在同一集群中同时有两个AMP失效,数据库系统停掉 Fallback vs. 
Non-Fallback Tables Fallback tables Fallback保护表的一个主要好处就是它的可用性和可恢复性。在一个AMP失效的情况下,仍然是所有的数据可以被访问。任何一组中坏掉两个AMP,则系统停掉。在这种情况下,必须手动重起系统。由于Fallback行数据的存在,FALLBACK表很容易恢复。 Non-Fallback Tables 没有Fallback保护的表会受到一个AMP坏掉的影响。表还是可以被访问的,但是只能访问在线的AMP上的数据。单AMP的主索引访问是可能的,但是全表扫描是不可以的。没有受FALLBACK保护的表,如果遇到问题,只能从其他的存储介质上重新导入。 Fallback and Disk Arrays Fallback保护表的应用经常涉及到磁盘阵列保护的问题。通过镜像机制,RAID1为每一个数据行提供了一个备份。Fallback在RAID1之上增加了一层保护,但是某些实施案例这些开销是不可以承受的。 RAID1加Fallback保护提供了最高级别的数据保护。甚至在整个Rank失效的情况下,系统都是可用的。 RAID5加Fallback保护被普遍应用,可以对失去两个磁盘以下的情况进行保护。 因为系统每天要24个小时,每星期要7天地不停运行,推荐Fallback以减少系统的Down机时间。 三、Recovery Journals for Down AMPs(失效AMP的恢复日志) 任何AMP Down掉后,恢复日志会自动启动,来记录AMP在Down掉期间的所有行变化,对失效AMP的insert,update,delete等操作都被应用于同一集群中的Fallback备份。拥有Fallback备份的AMP会将变化的ROW_ID记录到恢复日志中。 这种操作一直进行到Down掉的AMP重新上线为止。做为重新启动的一部分,系统将读取日志,并将变化应用到重起的AMP。在恢复完成后,日志将被丢弃。 恢复日志是: 当AMP DOWN掉后自动启动 由同一FALLBACK组中的其他AMP维护 对系统用户来说是透明 AMP下线期间: 日志是活动的 表可以正常进行UPDATE操作 日志记录DOWN掉AMP变化的行的ROW_ID AMP重起上线 重新存储行的变化到当前的状态 日志恢复完成后被删除 四、Cliques 共享一个磁盘阵列两个或更多的TPA (Trusted Parallel Application)节点称为一个clique. Cliques只对UNIX系统适用。一个CLIQUE可以包含2到8个节点。通常建议每个CLIQUE 4个节点。 AMP虚拟进程可以运行在同一CLIQUE中的任何一个节点上,可以完全访问到磁盘阵列的所有空间。 在同一个CLIQUE中,多节点之间都可以通信。 在MPP系统中的一个CLIQUE中如果坏掉一个节点或磁盘,就算在没有FALLBACK保护情况下,MPP系统仍然可以运行。如果在同一CLIQUE中的不同节点上丢失了AMP虚拟进程,则系统要重起。当节点恢复后,将会把AMP转移给原来的节点。例如7x24系统,FALLBACK选项建议最小化系统磁盘的DOWN机时间。 五、Transient Journal /临时日志 临时日志容许事务可以成功回滚(TXN)。事务在AMP没有收到结束请求是不能提交给数据的,无论是隐式还是显式。事务有可能失败,如果这样,涉及到的数据表就必须恢复到事件前的状态。 临时日志维护一张事务影响的数据行的副本。如果事务失败,事务前的数据副本就会应用到受影响的表上,然后从日志中删除,回滚操作完成。如果事务提交成功,则将在事务提交前的数据副本从日志中删除。 1)由事务前的视图组成 2)在事务TXN失败后自动回滚到事务前的状态 3)是自动和透明的 4)如果TXN失败则前试图应用到表 5)如果TXN完成则丢弃日志 六、备份和恢复数据 ARC工具 ARC工具可以备份和重新加载数据库对象,这样就可以恢复被损坏和丢失的数据。成功从其他介质重新存储对象有几方面:、 在磁盘错误后,恢复没有FALLBACK的表 恢复由于批处理产生不正常状态而被损坏的表 恢复那些被用户不慎删除的TABLE、VIEW和MACRO 各种用户损坏或删除的数据对象 ASF2工具 ASF2以X-WINDOWS的形式提供了编写、执行ARC命令脚本的前端工具。它是为运行在UNIX节点或工作区设计的。 七、Permanent Journal(永久日志) 永久日志的目的是提供一种可以将整个数据库恢复到一个特定的时间点的选择。可以恢复各种不可预料的软硬件错误。永久日志也可以减少全表备份而产生高昂的时间和资源开销。 永久日志是可选的,而且永久日志是由用户自己因特殊需要来创建的。永久日志包括两种前像日志(回滚)和后像日志(前滚)。此外,用户必须自己手工设定是单像还是双像。 多张表或多个数据库可以共享同一永久日志。日志并发获取标准表的维护和查询的镜像。额外的磁盘空间需要计算以保证有充足的磁盘空间可以使用。 日志需要定期放到其他的介质上。这样就减少了全表备份的需要,只需备份变化就可以了。 永久日志: 永久日志可以使数据库恢复到一个特定的时间点,是可选项,用户自定义的,系统维护的日志。 是为了针对意想不到的软硬件灾难而备份数据 可以指定 ---一张或多张表 ---一个或多个数据库 可以建立前像日志使数据库回滚 可以建立后像日志是数据库前滚 表维护期间可以备份表改变的镜像 减少全表备份的需要 提供一种无Fallback表的恢复方式 需要额外的磁盘空间来存储变化的镜像 需要用户干预备份和恢复行为,应该被定期释放来节省磁盘空间 2009 年 08 月 27 日, 星期四 查看历史统计信息收集情况 下面的sql可以得出以往收集统计信息的具体信息: SELECT DatabaseName, TableName, ColumnName, /** stats collected on: 'C' --> Column 'I' --> Index 'M' --> Multiple columns (V2R5) **/ StatsType, /** collect stats date **/ CollectDate, /** collect stats time **/ CollectTime, /** V2R5: sample size used for collect stats**/ CASE SampleSize WHEN 0 THEN 100 ELSE SampleSize END AS SampleSize, /** Row Count Estimated when SampleSize < 100 **/ (-1**(NumRowsw1 / 32768)) --sign * (2**((NumRowsw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumRowsw1 mod 16) * 2**-4) + (NumRowsw2 * 2**-20) + (NumRowsw3 * 2**-36) + (NumRowsw4 * 2**-52)) --fraction as NumRows, /** Distinct Values Estimated when SampleSize < 100 **/ (-1**(NumValuesw1 / 32768)) --sign * (2**((NumValuesw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumValuesw1 mod 16) * 2**-4) + (NumValuesw2 * 2**-20) + (NumValuesw3 * 2**-36) + (NumValuesw4 * 2**-52)) --fraction as NumValues, /** Number of NULLs Estimated when SampleSize < 100 **/ (-1**(NumNullsw1 / 32768)) --sign * (2**((NumNullsw1/16 mod 2048) - 1023)) --exponent * (1 + ((NumNullsw1 mod 16) * 2**-4) + (NumNullsw2 * 2**-20) + (NumNullsw3 * 2**-36) + (NumNullsw4 * 2**-52)) 
--fraction as NumNulls, /** Maximum number of rows / value, Estimated when SampleSize < 100 **/ (-1**(ModeFreqw1 / 32768)) --sign * (2**((ModeFreqw1/16 mod 2048) - 1023)) --exponent * (1 + ((ModeFreqw1 mod 16) * 2**-4) + (ModeFreqw2 * 2**-20) + (ModeFreqw3 * 2**-36) + (ModeFreqw4 * 2**-52)) --fraction as ModeFreq FROM ( SELECT DatabaseName, TableName, ColumnName, Stats, StatsType, ( (HASHBUCKET (SUBSTR(Stats, 2, 1) || SUBSTR(Stats, 1, 1) (BYTE(4)) ) - 1900 ) * 10000 + (HASHBUCKET ('00'xb || SUBSTR(Stats, 3, 1) (BYTE(4)) ) ) * 100 + (HASHBUCKET ( '00'xb || SUBSTR(Stats, 4, 1) (BYTE(4)) ) ) ) (DATE) AS CollectDate, (CAST( (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 5, 1) AS BYTE(4)) ) (FORMAT '99:') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 6, 1) AS BYTE(4)) ) (FORMAT '99:') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 7, 1) AS BYTE(4)) ) (FORMAT '99.') ) || (HASHBUCKET (CAST('00'xb || SUBSTR(Stats, 8, 1) AS BYTE(4)) ) (FORMAT '99') ) AS TIME(2)) ) AS CollectTime, HASHBUCKET ('00'xb || SUBSTR(Stats, 12, 1) (BYTE(4))) AS SampleSize, HASHBUCKET(substr(Stats, 12+8, 1) || substr(Stats, 12+7, 1) (byte(4))) as NumNullsw1, HASHBUCKET(substr(Stats, 12+6, 1) || substr(Stats, 12+5, 1) (byte(4))) as NumNullsw2, HASHBUCKET(substr(Stats, 12+4, 1) || substr(Stats, 12+3, 1) (byte(4))) as NumNullsw3, HASHBUCKET(substr(Stats, 12+2, 1) || substr(Stats, 12+1, 1) (byte(4))) as NumNullsw4, HASHBUCKET(substr(Stats, 40+Offset+8, 1) || substr(Stats, 40+Offset+7, 1) (byte(4))) as ModeFreqw1, HASHBUCKET(substr(Stats, 40+Offset+6, 1) || substr(Stats, 40+Offset+5, 1) (byte(4))) as ModeFreqw2, HASHBUCKET(substr(Stats, 40+Offset+4, 1) || substr(Stats, 40+Offset+3, 1) (byte(4))) as ModeFreqw3, HASHBUCKET(substr(Stats, 40+Offset+2, 1) || substr(Stats, 40+Offset+1, 1) (byte(4))) as ModeFreqw4, HASHBUCKET(substr(Stats, 48+Offset+8, 1) || substr(Stats, 48+Offset+7, 1) (byte(4))) as NumValuesw1, HASHBUCKET(substr(Stats, 48+Offset+6, 1) || substr(Stats, 48+Offset+5, 1) (byte(4))) as NumValuesw2, HASHBUCKET(substr(Stats, 48+Offset+4, 1) || substr(Stats, 48+Offset+3, 1) (byte(4))) as NumValuesw3, HASHBUCKET(substr(Stats, 48+Offset+2, 1) || substr(Stats, 48+Offset+1, 1) (byte(4))) as NumValuesw4, HASHBUCKET(substr(Stats, 56+Offset+8, 1) || substr(Stats, 56+Offset+7, 1) (byte(4))) as NumRowsw1, HASHBUCKET(substr(Stats, 56+Offset+6, 1) || substr(Stats, 56+Offset+5, 1) (byte(4))) as NumRowsw2, HASHBUCKET(substr(Stats, 56+Offset+4, 1) || substr(Stats, 56+Offset+3, 1) (byte(4))) as NumRowsw3, HASHBUCKET(substr(Stats, 56+Offset+2, 1) || substr(Stats, 56+Offset+1, 1) (byte(4))) as NumRowsw4 FROM ( SELECT DatabaseName, TableName, MAX(CASE WHEN ColumnPosition = 1 THEN TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 2 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 3 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 4 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 5 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 6 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 7 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 8 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 9 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 10 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 11 THEN ',' || TRIM(ColumnName) ELSE '' END) || MAX(CASE WHEN ColumnPosition = 12 THEN ',' || TRIM(ColumnName) ELSE 
Friday, September 18, 2009

Getting a date-type partition column to use partition elimination

The column date_id is defined as INTEGER and is the partitioning column of the table. A job needs to read the cumulative data from the first day of the current month up to now. The SQL (from 老虎) was:

select *
FROM PV_MART_A.FIN_MID_SERV_INC_DAILY A1
WHERE Date_Id <= CAST('20090818' AS INTEGER)-2
  and substr(cast(date_id as VARCHAR(8)),1,6)
      = substr((cast((CAST(CAST('20090818' AS DATE FORMAT 'YYYYMMDD')-2 AS INTEGER)+19000000) as varchar(8))),1,6);

Because both sides of that condition are wrapped in expressions, including the partitioning column itself, it is no surprise that the query gets no partition elimination. The SQL was rewritten as:

select *
FROM PV_MART_A.FIN_MID_SERV_INC_DAILY A1
WHERE Date_Id between (date - extract(day from date) + 1)/1+19000000
              and CAST('20090818' AS INTEGER)-2;

Still no partition elimination. But written as:

select *
FROM PV_MART_A.FIN_MID_SERV_INC_DAILY A1
WHERE Date_Id between 20090801 and 20090816;

it does prune partitions. Is there any way to make the original query do the same?

After quite a few experiments, the following form does get partition elimination:

select *
FROM PV_MART_A.FIN_MID_SERV_INC_DAILY A1
WHERE Date_Id between cast(trim(substr('20090818',1,6))||'01' as integer)
              and CAST('20090818' AS INTEGER)-2;

Problem solved and the tuning goal met, but I still have not worked out why the optimizer makes this choice. Something to dig into properly when I have some spare time.
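For reference, below is a minimal sketch of the kind of table definition this behaviour assumes. The real DDL of FIN_MID_SERV_INC_DAILY is not shown in the post, so the column list, the partition range and the _DEMO suffix are made up purely for illustration:

CREATE TABLE PV_MART_A.FIN_MID_SERV_INC_DAILY_DEMO
( Date_Id  INTEGER NOT NULL        -- calendar day stored as yyyymmdd
, Serv_Id  INTEGER                 -- hypothetical column
, Fee_Amt  DECIMAL(18,2)           -- hypothetical column
)
PRIMARY INDEX (Serv_Id)
PARTITION BY RANGE_N(Date_Id BETWEEN 20090101 AND 20101231 EACH 1);

With a definition of this shape, the EXPLAIN of the working query above should mention a retrieve from "n partitions of" the table instead of a scan over every partition. The pattern observed here is that BETWEEN bounds built only from literals, such as cast(trim(substr('20090818',1,6))||'01' as integer), are folded to constants and allow static partition elimination, while the bound derived from DATE arithmetic was not; as noted above, though, the post leaves the optimizer's exact reasoning open.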
Friday, September 18, 2009

Teradata spool space (2)

The system uses spool space in the following situations:
• The result set of every user query, for as long as the session has not ended. Each user therefore needs enough spool space to hold query results.
• Large tables.
• Large volatile tables. These tables require more available spool space.

A rough estimate of the spool space a system needs: for a system with no fallback, no compression and a moderate workload, spool should be about 30% of the system's MAXPERM, or about 40% of its CURRENTPERM.

More spool space is needed when:
• Data has a high compression ratio -- compression lowers current perm data storage, but does not always lower spool usage, because spool tables are not always compressed.
• Batch loads bring in millions of rows or more at a time, so spool is needed to hold intermediate data during the load.
• Concurrency is high.

Less spool space is needed when:
• Fallback doubles the current perm used to store the data but does not double spool, because only the primary copy is spooled.
• Loading is done in a trickle, for example with TPump, which moves only small amounts of data at a time.
• The workload is well tuned; good tuning uses less CPU and less spool space.

Spool space falls into three categories:
• Volatile spool, which is released when:
  1. The transaction completes (unless the table was created with ON COMMIT PRESERVE ROWS)
  2. The table is dropped manually during the session
  3. The session ends
  4. The Teradata Database resets
• Intermediate spool:
  1. Intermediate spool is released as soon as its result is no longer needed.
  2. The point at which it is released can be seen in the execution plan.
  3. The step that references an intermediate spool for the last time is marked "(Last Use)"; the spool is flushed once that step completes.
• Output spool. Output results are either:
  1. Final rows returned in the answer set for a query
  2. Rows updated within, inserted into, or deleted from a base table

Permanent, temporary and spool data blocks cannot coexist on the same cylinder.

Another write-up on spool: http://miguelmin.itpub.net/post/38274/479022

Saturday, October 10, 2009

A temporary goodbye to Teradata

I have been working with Teradata for more than two years. With everyone's help and my own effort I have picked up a good deal of Teradata knowledge and made some modest progress in system administration and tuning.

For more room to grow, I have decided to leave my current company and move to a new environment to build myself up. I want to thank all the senior colleagues and friends here for their encouragement and help, and to apologize for any inconvenience my leaving brings.

The new job may not involve Teradata, but I believe databases share most of their fundamentals; I can learn to use another database well and still be a technically solid member of the team in a smaller setting.

Until now I have focused almost entirely on the technical side. With this new opportunity I hope to grow on the business side as well: to become more sensitive to the data, to understand the business better, to look at technology from the business angle and use technology to push the business forward... There is a bit of daydreaming in that, but I do not want to stay on a purely technical track forever; sometimes I feel that people who only know the technology drift to the edge of a project...

Leaving is for the sake of a dream, but where does the dream end? I have a small idea in mind. To borrow 灰太狼's line: just wait, I will be back...
