First, create an example table:
create table mytable (column1 varchar(10), column2 varchar(10), column3 varchar(10));
The insert is straightforward:
insert into mytable
(column1, column2, column3)
values
('value1', 'value2', 'value3'),
('value4', 'value5', 'value6');
However, with Redshift you will typically insert many more rows at a time than the one or two you would with an OLTP database such as PostgreSQL, MySQL, or Oracle.
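If you are generating such a bulk insert from application code, the rows can be assembled into a single multi-row statement. A minimal Python sketch (the helper name and the naive quote-doubling are illustrative only; real code should use a database driver's parameter binding):

```python
def build_multirow_insert(table, columns, rows):
    """Build one multi-row INSERT statement from a list of row tuples.

    Naive single-quote escaping for illustration only -- prefer a
    driver's parameter binding in production code.
    """
    def quote(value):
        return "'" + str(value).replace("'", "''") + "'"

    values = ",\n".join(
        "(" + ", ".join(quote(v) for v in row) + ")" for row in rows
    )
    return f"insert into {table}\n({', '.join(columns)})\nvalues\n{values};"


sql = build_multirow_insert(
    "mytable",
    ["column1", "column2", "column3"],
    [("value1", "value2", "value3"), ("value4", "value5", "value6")],
)
print(sql)
```

Even so, for anything beyond a few thousand rows, the COPY approach below is the better fit for Redshift.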
So, create a file that is pipe delimited like this:
value1|value2|value3
value4|value5|value6
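The file above can also be produced from code. A small Python sketch using the standard `csv` module with a pipe delimiter (the filename `example2.txt` matches the upload command below):

```python
import csv

rows = [
    ("value1", "value2", "value3"),
    ("value4", "value5", "value6"),
]

# Write pipe-delimited rows; pipe is also COPY's default delimiter,
# so no DELIMITER option is needed on the load side.
with open("example2.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerows(rows)
```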
Upload this to s3:
aws s3 cp example2.txt s3://mybucket/
COPY documentation: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
Then load the file with COPY (the preferred method for bulk loads):
copy mytable from 's3://mybucket/example2.txt' iam_role default;
You also asked for a procedure to do this insert. Stored Procedure documentation: https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_PROCEDURE.html
Here is a quick example:
create or replace procedure pr_example2() as
$$
BEGIN
insert into mytable
(column1, column2, column3)
values
('value1', 'value2', 'value3'),
('value4', 'value5', 'value6');
END;
$$
LANGUAGE plpgsql;
And to execute the procedure, you "call" it:
call pr_example2();