
Commit e9cd6c3

feat: refer to aws s3 implementation, use list pagination
Signed-off-by: LinPr <[email protected]>
1 parent 517ea5d commit e9cd6c3

File tree

22 files changed (+620, -173 lines)


cmd/cp/examples.go

Lines changed: 227 additions & 0 deletions
@@ -0,0 +1,227 @@
package cp

const cp_examples = `Example 1: Copying a local file to S3

The following cp command copies a single file to a specified bucket and key:

aws s3 cp test.txt s3://amzn-s3-demo-bucket/test2.txt

Output:

upload: test.txt to s3://amzn-s3-demo-bucket/test2.txt

Example 2: Copying a local file to S3 with an expiration date

The following cp command copies a single file to a specified bucket and key that expires at the specified ISO 8601 timestamp:

aws s3 cp test.txt s3://amzn-s3-demo-bucket/test2.txt \
    --expires 2014-10-01T20:30:00Z

Output:

upload: test.txt to s3://amzn-s3-demo-bucket/test2.txt

Example 3: Copying a file from S3 to S3

The following cp command copies a single S3 object to a specified bucket and key:

aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt

Output:

copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt

Example 4: Copying an S3 object to a local file

The following cp command copies a single object to a specified file locally:

aws s3 cp s3://amzn-s3-demo-bucket/test.txt test2.txt

Output:

download: s3://amzn-s3-demo-bucket/test.txt to test2.txt

Example 5: Copying an S3 object from one bucket to another

The following cp command copies a single object to a specified bucket while retaining its original name:

aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket2/

Output:

copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket2/test.txt

Example 6: Recursively copying S3 objects to a local directory

When passed with the parameter --recursive, the following cp command recursively copies all objects under a specified prefix and bucket to a specified directory. In this example, the bucket amzn-s3-demo-bucket has the objects test1.txt and test2.txt:

aws s3 cp s3://amzn-s3-demo-bucket . \
    --recursive

Output:

download: s3://amzn-s3-demo-bucket/test1.txt to test1.txt
download: s3://amzn-s3-demo-bucket/test2.txt to test2.txt

Example 7: Recursively copying local files to S3

When passed with the parameter --recursive, the following cp command recursively copies all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:

aws s3 cp myDir s3://amzn-s3-demo-bucket/ \
    --recursive \
    --exclude "*.jpg"

Output:

upload: myDir/test1.txt to s3://amzn-s3-demo-bucket/test1.txt

Example 8: Recursively copying S3 objects to another bucket

When passed with the parameter --recursive, the following cp command recursively copies all objects under a specified bucket to another bucket while excluding some objects by using an --exclude parameter. In this example, the bucket amzn-s3-demo-bucket has the objects test1.txt and another/test1.txt:

aws s3 cp s3://amzn-s3-demo-bucket/ s3://amzn-s3-demo-bucket2/ \
    --recursive \
    --exclude "another/*"

Output:

copy: s3://amzn-s3-demo-bucket/test1.txt to s3://amzn-s3-demo-bucket2/test1.txt

You can combine --exclude and --include options to copy only objects that match a pattern, excluding all others:

aws s3 cp s3://amzn-s3-demo-bucket/logs/ s3://amzn-s3-demo-bucket2/logs/ \
    --recursive \
    --exclude "*" \
    --include "*.log"

Output:

copy: s3://amzn-s3-demo-bucket/logs/test/test.log to s3://amzn-s3-demo-bucket2/logs/test/test.log
copy: s3://amzn-s3-demo-bucket/logs/test3.log to s3://amzn-s3-demo-bucket2/logs/test3.log

Example 9: Setting the Access Control List (ACL) while copying an S3 object

The following cp command copies a single object to a specified bucket and key while setting the ACL to public-read-write:

aws s3 cp s3://amzn-s3-demo-bucket/test.txt s3://amzn-s3-demo-bucket/test2.txt \
    --acl public-read-write

Output:

copy: s3://amzn-s3-demo-bucket/test.txt to s3://amzn-s3-demo-bucket/test2.txt

Note that if you're using the --acl option, ensure that any associated IAM policies include the "s3:PutObjectAcl" action:

aws iam get-user-policy \
    --user-name myuser \
    --policy-name mypolicy

Output:

{
    "UserName": "myuser",
    "PolicyName": "mypolicy",
    "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "s3:PutObject",
                    "s3:PutObjectAcl"
                ],
                "Resource": [
                    "arn:aws:s3:::amzn-s3-demo-bucket/*"
                ],
                "Effect": "Allow",
                "Sid": "Stmt1234567891234"
            }
        ]
    }
}

Example 10: Granting permissions for an S3 object

The following cp command illustrates the use of the --grants option to grant read access to all users identified by URI and full control to a specific user identified by their Canonical ID:

aws s3 cp file.txt s3://amzn-s3-demo-bucket/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=id=79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be

Output:

upload: file.txt to s3://amzn-s3-demo-bucket/file.txt

Example 11: Uploading a local file stream to S3

WARNING:
    PowerShell may alter the encoding of or add a CRLF to piped input.

The following cp command uploads a local file stream from standard input to a specified bucket and key:

aws s3 cp - s3://amzn-s3-demo-bucket/stream.txt

Example 12: Uploading a local file stream that is larger than 50GB to S3

The following cp command uploads a 51GB local file stream from standard input to a specified bucket and key. The --expected-size option must be provided, or the upload may fail when it reaches the default part limit of 10,000:

aws s3 cp - s3://amzn-s3-demo-bucket/stream.txt --expected-size 54760833024

Example 13: Downloading an S3 object as a local file stream

WARNING:
    PowerShell may alter the encoding of or add a CRLF to piped or redirected output.

The following cp command downloads an S3 object locally as a stream to standard output. Downloading as a stream is not currently compatible with the --recursive parameter:

aws s3 cp s3://amzn-s3-demo-bucket/stream.txt -

Example 14: Uploading to an S3 access point

The following cp command uploads a single file (mydoc.txt) to the access point (myaccesspoint) at the key (mykey):

aws s3 cp mydoc.txt s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey

Output:

upload: mydoc.txt to s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey

Example 15: Downloading from an S3 access point

The following cp command downloads a single object (mykey) from the access point (myaccesspoint) to the local file (mydoc.txt):

aws s3 cp s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey mydoc.txt

Output:

download: s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey to mydoc.txt
`
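Example 12's --expected-size value is exactly 51 GiB (51 * 1024^3 = 54760833024), and the reason the flag matters is arithmetic: a multipart upload is capped at 10,000 parts, so the chunk size must be chosen up front from the expected total. A minimal sketch of that calculation (minPartSize is a hypothetical helper, not part of this repository or the AWS CLI):

```go
package main

import "fmt"

// minPartSize computes the smallest multipart chunk size that fits an
// upload of totalBytes within maxParts parts (S3 allows at most 10,000).
func minPartSize(totalBytes, maxParts int64) int64 {
	return (totalBytes + maxParts - 1) / maxParts // ceiling division
}

func main() {
	const gib = int64(1 << 30)
	total := 51 * gib // 54760833024, the --expected-size in Example 12
	fmt.Println(total, minPartSize(total, 10000))
}
```

For a stream, the size cannot be discovered by stat'ing a file, which is why the caller has to declare it; without the hint, a too-small default chunk size can exhaust the 10,000-part budget mid-upload.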

cmd/du/du.go

Lines changed: 11 additions & 5 deletions
@@ -46,7 +46,8 @@ type Args struct {
 	S3Uri string `validate:"omitempty"`
 }
 type Flags struct {
-	DryRun bool `json:"DryRun" yaml:"DryRun"`
+	DryRun bool    `json:"DryRun" yaml:"DryRun"`
+	Region *string `json:"Region" yaml:"Region"`
 }
@@ -76,12 +77,16 @@ func (o *Options) run() error {
 	fmt.Fprintf(os.Stdout, "options: %s\n", string(j))
 	// return nil

-	cli, err := s3store.NewS3Client(context.TODO())
+	opt := s3store.S3Option{
+		Region: *o.Region,
+	}
+
+	cli, err := s3store.NewS3Client(context.TODO(), opt)
 	if err != nil {
 		return err
 	}

-	parsedUri, err := uri.ParseS3Url(o.S3Uri)
+	parsedUri, err := uri.ParseS3Uri(o.S3Uri)
 	if err != nil {
 		return err
 	}
@@ -93,12 +98,13 @@ func (o *Options) run() error {
 }

 func listObjects(cli *s3store.S3Store, bucket, key string) error {
-	objs, err := cli.ListObjects(context.TODO(), bucket, key)
+	objs, err := cli.ListObjectsWithPagination(context.TODO(), bucket, key)
 	if err != nil {
 		return err
 	}
 	var size int64
-	for _, obj := range objs.Contents {
+
+	for _, obj := range objs {
 		size += *obj.Size
 	}
 	unit := "bytes"
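The switch from a single ListObjects call (which returns at most one page, objs.Contents) to ListObjectsWithPagination is the "use list pagination" part of the commit: without it, du silently undercounts buckets with more than one page of objects. The token-following loop such a helper needs can be sketched without the AWS SDK; listPage, objectLister, and fakeLister below are stand-ins for s3.ListObjectsV2Output, the S3 client, and a test double, not the repository's actual types:

```go
package main

import "fmt"

// listPage mimics one page of an S3 ListObjectsV2 response: a batch of
// keys plus a continuation token, with IsTruncated false on the last page.
type listPage struct {
	Keys             []string
	NextContinuation string
	IsTruncated      bool
}

// objectLister abstracts the paged list call; a real implementation
// would wrap s3.Client.ListObjectsV2 from aws-sdk-go-v2.
type objectLister interface {
	ListPage(continuationToken string) (listPage, error)
}

// listAllObjects follows continuation tokens until the service reports
// no more pages, so listings beyond the per-page limit are complete.
func listAllObjects(l objectLister) ([]string, error) {
	var keys []string
	token := ""
	for {
		page, err := l.ListPage(token)
		if err != nil {
			return nil, err
		}
		keys = append(keys, page.Keys...)
		if !page.IsTruncated {
			return keys, nil
		}
		token = page.NextContinuation
	}
}

// fakeLister serves fixed pages of at most pageSize keys, standing in
// for S3's 1000-object page limit.
type fakeLister struct {
	all      []string
	pageSize int
}

func (f *fakeLister) ListPage(token string) (listPage, error) {
	start := 0
	if token != "" {
		fmt.Sscanf(token, "%d", &start)
	}
	end := start + f.pageSize
	if end >= len(f.all) {
		return listPage{Keys: f.all[start:]}, nil
	}
	return listPage{
		Keys:             f.all[start:end],
		NextContinuation: fmt.Sprintf("%d", end),
		IsTruncated:      true,
	}, nil
}

func main() {
	l := &fakeLister{all: []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}, pageSize: 2}
	keys, err := listAllObjects(l)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(keys), keys) // all five keys, gathered across three pages
}
```

With aws-sdk-go-v2 the same loop is usually written with s3.NewListObjectsV2Paginator and its HasMorePages/NextPage methods, which manage the continuation token internally.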

cmd/get/get.go

Lines changed: 3 additions & 3 deletions
@@ -78,12 +78,12 @@ func (o *Options) run() error {
 	fmt.Fprintf(os.Stdout, "options: %s\n", string(j))
 	// return nil

-	parsedUri, err := uri.ParseS3Url(o.S3Uri)
+	parsedUri, err := uri.ParseS3Uri(o.S3Uri)
 	if err != nil {
 		return err
 	}
-
-	store, err := storage.NewStorage(context.TODO())
+	opt := storage.StorageOption{}
+	store, err := storage.NewStorage(context.TODO(), opt)
 	if err != nil {
 		return err
 	}

cmd/ls/examples.go

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
package ls

const ls_examples = `Example 1: Listing all user owned buckets

The following ls command lists all of the buckets owned by the user. In this example, the user owns the buckets amzn-s3-demo-bucket and amzn-s3-demo-bucket2. The timestamp is the date the bucket was created, shown in your machine's time zone. This date can change when making changes to your bucket, such as editing its bucket policy. Note that if s3:// is used for the path argument <S3Uri>, it will list all of the buckets as well.

aws s3 ls

Output:

2013-07-11 17:08:50 amzn-s3-demo-bucket
2013-07-24 14:55:44 amzn-s3-demo-bucket2

Example 2: Listing all prefixes and objects in a bucket

The following ls command lists objects and common prefixes under a specified bucket and prefix. In this example, the user owns the bucket amzn-s3-demo-bucket with the objects test.txt and somePrefix/test.txt. The LastWriteTime and Length are arbitrary. Note that since the ls command has no interaction with the local filesystem, the s3:// URI scheme is not required to resolve ambiguity and may be omitted.

aws s3 ls s3://amzn-s3-demo-bucket

Output:

PRE somePrefix/
2013-07-25 17:06:27         88 test.txt

Example 3: Listing all prefixes and objects in a specific bucket and prefix

The following ls command lists objects and common prefixes under a specified bucket and prefix. However, there are no objects nor common prefixes under the specified bucket and prefix.

aws s3 ls s3://amzn-s3-demo-bucket/noExistPrefix

Output:

None

Example 4: Recursively listing all prefixes and objects in a bucket

The following ls command will recursively list objects in a bucket. Rather than showing PRE dirname/ in the output, all the content in a bucket will be listed in order.

aws s3 ls s3://amzn-s3-demo-bucket \
    --recursive

Output:

2013-09-02 21:37:53         10 a.txt
2013-09-02 21:37:53    2863288 foo.zip
2013-09-02 21:32:57         23 foo/bar/.baz/a
2013-09-02 21:32:58         41 foo/bar/.baz/b
2013-09-02 21:32:57        281 foo/bar/.baz/c
2013-09-02 21:32:57         73 foo/bar/.baz/d
2013-09-02 21:32:57        452 foo/bar/.baz/e
2013-09-02 21:32:57        896 foo/bar/.baz/hooks/bar
2013-09-02 21:32:57        189 foo/bar/.baz/hooks/foo
2013-09-02 21:32:57        398 z.txt

Example 5: Summarizing all prefixes and objects in a bucket

The following ls command demonstrates the same command using the --human-readable and --summarize options. --human-readable displays file size in Bytes/MiB/KiB/GiB/TiB/PiB/EiB. --summarize displays the total number of objects and total size at the end of the result listing:

aws s3 ls s3://amzn-s3-demo-bucket \
    --recursive \
    --human-readable \
    --summarize

Output:

2013-09-02 21:37:53   10 Bytes a.txt
2013-09-02 21:37:53    2.9 MiB foo.zip
2013-09-02 21:32:57   23 Bytes foo/bar/.baz/a
2013-09-02 21:32:58   41 Bytes foo/bar/.baz/b
2013-09-02 21:32:57  281 Bytes foo/bar/.baz/c
2013-09-02 21:32:57   73 Bytes foo/bar/.baz/d
2013-09-02 21:32:57  452 Bytes foo/bar/.baz/e
2013-09-02 21:32:57  896 Bytes foo/bar/.baz/hooks/bar
2013-09-02 21:32:57  189 Bytes foo/bar/.baz/hooks/foo
2013-09-02 21:32:57  398 Bytes z.txt

Total Objects: 10
Total Size: 2.9 MiB

Example 6: Listing from an S3 access point

The following ls command lists objects from the access point (myaccesspoint):

aws s3 ls s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/

Output:

PRE somePrefix/
2013-07-25 17:06:27         88 test.txt
`
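The --human-readable formatting described in Example 5 (binary units, one decimal place above Bytes) can be sketched in a few lines; humanReadable is a hypothetical helper, not a function in this repository or the AWS CLI:

```go
package main

import "fmt"

// humanReadable renders a byte count using the binary units the ls
// examples show (Bytes, KiB, MiB, ...), dividing by 1024 per step and
// printing one decimal place once above the Bytes range.
func humanReadable(n int64) string {
	units := []string{"Bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
	val := float64(n)
	i := 0
	for val >= 1024 && i < len(units)-1 {
		val /= 1024
		i++
	}
	if i == 0 {
		return fmt.Sprintf("%d Bytes", n)
	}
	return fmt.Sprintf("%.1f %s", val, units[i])
}

func main() {
	fmt.Println(humanReadable(10))      // 10 Bytes
	fmt.Println(humanReadable(3041280)) // 2.9 MiB
}
```

The same helper would slot naturally into du.go above, where the size sum currently starts from a hard-coded unit of "bytes".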
