docs/advanced-guide/handling-file/page.md
GoFr simplifies the complexity of working with different file stores by offering a consistent interface across them.

By default, the local file-store is initialized, and users can access it from the context.

GoFr also supports FTP/SFTP file-stores. Developers can also connect and use their AWS S3 bucket or Google Cloud Storage (GCS) bucket as a file-store. The file-store can be initialized as follows:

### FTP file-store
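The body of the FTP example is elided in the source; below is a minimal sketch of initializing an FTP file-store. The import path, `ftp.New`, and the `Config` field names are assumptions based on GoFr's file-store packages, and the connection values are placeholders — check the GoFr FTP documentation for the exact API.

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/ftp"
)

func main() {
	app := gofr.New()

	// Placeholder connection values; point these at your FTP server.
	// Field names are assumptions — verify against the GoFr ftp package.
	app.AddFileStore(ftp.New(&ftp.Config{
		Host:      "127.0.0.1",
		User:      "username",
		Password:  "password",
		Port:      21,
		RemoteDir: "/ftp/user",
	}))

	app.Run()
}
```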
### SFTP file-store
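The SFTP example's body is likewise elided; the sketch below mirrors the FTP one under the same assumptions — the import path and `Config` fields are illustrative, with placeholder credentials.

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/sftp"
)

func main() {
	app := gofr.New()

	// Placeholder connection values; field names are assumptions —
	// verify against the GoFr sftp package.
	app.AddFileStore(sftp.New(&sftp.Config{
		Host:     "127.0.0.1",
		User:     "username",
		Password: "password",
		Port:     22,
	}))

	app.Run()
}
```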
### AWS S3 Bucket as File-Store

To run the S3 file-store locally, we can use localstack:

`docker run --rm -it -p 4566:4566 -p 4510-4559:4510-4559 localstack/localstack`
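The S3 example's body is elided in the source; a minimal sketch of initializing an S3 file-store follows. The import path and `Config` field names are assumptions, the credentials are dummy localstack values, and the endpoint points at the localstack container started above; `gofr-bucket-2` is the bucket name the original example used.

```go
package main

import (
	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/s3"
)

func main() {
	app := gofr.New()

	// Dummy localstack credentials; field names are assumptions —
	// verify against the GoFr s3 package.
	app.AddFileStore(s3.New(&s3.Config{
		EndPoint:        "http://localhost:4566",
		BucketName:      "gofr-bucket-2",
		Region:          "us-east-1",
		AccessKeyID:     "test",
		SecretAccessKey: "test",
	}))

	app.Run()
}
```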
> Note: The current implementation supports handling only one bucket at a time,
> as shown in the example with `gofr-bucket-2`. Bucket switching mid-operation is not supported.
### Google Cloud Storage (GCS) Bucket as File-Store

To run the GCS file-store locally, we can use fake-gcs-server:

`docker run -it --rm -p 4443:4443 -e STORAGE_EMULATOR_HOST=0.0.0.0:4443 fsouza/fake-gcs-server:latest`
```go
// readCredentials reads the service-account credentials file used to
// authenticate against GCS.
func readCredentials(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("Failed to read credentials file: %v", err)
	}

	return data
}
```
> **Note:** When connecting to the actual GCS service, authentication can be provided via `CredentialsJSON` or the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
> When using fake-gcs-server, authentication is not required.
> Currently, only one bucket per file-store instance is supported.
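The GCS initialization code is elided in the source; the self-contained sketch below shows the general shape. The import path, `gcs.New`, and the `Config` field names other than `CredentialsJSON` (which the note above mentions) are assumptions — verify them against the GoFr GCS file-store package.

```go
package main

import (
	"log"
	"os"

	"gofr.dev/pkg/gofr"
	"gofr.dev/pkg/gofr/datasource/file/gcs" // assumed import path
)

func main() {
	app := gofr.New()

	// CredentialsJSON is not needed when targeting fake-gcs-server.
	creds, err := os.ReadFile("credentials.json")
	if err != nil {
		log.Fatalf("Failed to read credentials file: %v", err)
	}

	// BucketName is a placeholder; field names are assumptions.
	app.AddFileStore(gcs.New(&gcs.Config{
		BucketName:      "gofr-bucket",
		CredentialsJSON: creds,
	}))

	app.Run()
}
```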
To switch to another directory in the same parent directory:

```go
currentDir, err := ctx.File.Chdir("../my_dir2")
```
To switch to a subfolder of the current directory:

```go
currentDir, err := ctx.File.Chdir("sub_dir")
```
> Note: This method attempts to change the directory, but S3's flat structure and fixed bucket
> make this operation inapplicable. Similarly, GCS uses a flat structure where directories are simulated through object prefixes.
### Read a Directory

The ReadDir function reads the specified directory and returns a sorted list of its entries as FileInfo objects. Each FileInfo object provides access to its associated methods, eliminating the need for additional stat calls.

If an error occurs during the read operation, ReadDir returns the successfully read entries up to the point of the error, along with the error itself. Passing "." as the directory argument returns the entries of the current directory.

```go
entries, err := ctx.File.ReadDir("../testdir")

for _, entry := range entries {
	entryType := "File"

	if entry.IsDir() {
		entryType = "Dir"
	}

	fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n", entryType, entry.Name(), entry.Size(), entry.ModTime())
}
```
> Note: In S3 and GCS, directories are represented as prefixes of file keys/object names. This method retrieves file
> entries only from the immediate level within the specified directory.
GoFr supports reading CSV/JSON/TEXT files line by line.

```go
reader, err := file.ReadAll()

for reader.Next() {
	var b string

	// For reading CSV/TEXT files, pass a pointer to a string to Scan.
	// In case of JSON, pass structs with JSON tags as defined in encoding/json.
	err = reader.Scan(&b)

	fmt.Println(b)
}
```
### Opening and Reading Content from a File

To open a file with default settings, use the `Open` command, which provides read and seek permissions only. For write permissions, use `OpenFile` with the appropriate file modes.

> Note: In FTP, file permissions are not differentiated; both `Open` and `OpenFile` allow all file operations regardless of the specified permissions.

```go
csvFile, _ := ctx.File.Open("my_file.csv")
```
### Getting Information of the file/directory

Stat retrieves details of a file or directory, including its name, size, last modified time, and type (such as whether it is a file or a folder).

```go
entry, _ := ctx.File.Stat("my_file.text")
entryType := "File"

if entry.IsDir() {
	entryType = "Dir"
}

fmt.Printf("%v: %v Size: %v Last Modified Time : %v\n", entryType, entry.Name(), entry.Size(), entry.ModTime())
```
> Note: In S3 and GCS:
>
> - Names without a file extension are treated as directories by default.
> - Names starting with "0" are interpreted as binary files, with the "0" prefix removed (S3-specific behavior).
>
> For directories, the method calculates the total size of all contained objects and returns the most recent modification time. For files, it directly returns the file's size and last modified time.
> Note: Currently, the S3 package supports the deletion of unversioned files from general-purpose buckets only. Directory buckets and versioned files are not supported for deletion by this method. GCS supports the deletion of both files and empty directories.

```go
err := ctx.File.Remove("my_dir")
```
The `RemoveAll` command deletes all subdirectories as well. If you delete the current working directory, such as "../currentDir", the working directory will be reset to its parent directory.

> Note: In S3, RemoveAll only supports deleting directories and will return an error if a file path (as indicated by a file extension) is provided. GCS handles both files and directories.

```go
err := ctx.File.RemoveAll("my_dir/my_text")
```
> GoFr supports relative paths, allowing locations to be referenced relative to the current working directory. However, since S3 and GCS use
> a flat file structure, all methods require a full path relative to the bucket.

> Errors have been skipped in the examples to focus on the core logic; it is recommended to handle all errors.