
Commit 367c240

bjorn-helgaas authored and Christoph Hellwig committed
nvme: fix various comment typos
Fix typos in comments.

Signed-off-by: Bjorn Helgaas <[email protected]>
Reviewed-by: Chaitanya Kulkarni <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
1 parent b6160cd commit 367c240

4 files changed (+9, -9 lines)


drivers/nvme/host/fc.c

Lines changed: 2 additions & 2 deletions
@@ -1363,7 +1363,7 @@ nvme_fc_disconnect_assoc_done(struct nvmefc_ls_req *lsreq, int status)
 	 * down, and the related FC-NVME Association ID and Connection IDs
 	 * become invalid.
 	 *
-	 * The behavior of the fc-nvme initiator is such that it's
+	 * The behavior of the fc-nvme initiator is such that its
 	 * understanding of the association and connections will implicitly
 	 * be torn down. The action is implicit as it may be due to a loss of
 	 * connectivity with the fc-nvme target, so you may never get a
@@ -2777,7 +2777,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
 	 * as WRITE ZEROES will return a non-zero rq payload_bytes yet
 	 * there is no actual payload to be transferred.
 	 * To get it right, key data transmission on there being 1 or
-	 * more physical segments in the sg list. If there is no
+	 * more physical segments in the sg list. If there are no
 	 * physical segments, there is no payload.
 	 */
 	if (blk_rq_nr_phys_segments(rq)) {

drivers/nvme/host/tcp.c

Lines changed: 1 addition & 1 deletion
@@ -2179,7 +2179,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 
 	/*
 	 * Only start IO queues for which we have allocated the tagset
-	 * and limitted it to the available queues. On reconnects, the
+	 * and limited it to the available queues. On reconnects, the
 	 * queue number might have changed.
 	 */
 	nr_queues = min(ctrl->tagset->nr_hw_queues + 1, ctrl->queue_count);

drivers/nvme/target/fc.c

Lines changed: 3 additions & 3 deletions
@@ -459,7 +459,7 @@ nvmet_fc_disconnect_assoc_done(struct nvmefc_ls_req *lsreq, int status)
 	 * down, and the related FC-NVME Association ID and Connection IDs
 	 * become invalid.
 	 *
-	 * The behavior of the fc-nvme target is such that it's
+	 * The behavior of the fc-nvme target is such that its
 	 * understanding of the association and connections will implicitly
 	 * be torn down. The action is implicit as it may be due to a loss of
 	 * connectivity with the fc-nvme host, so the target may never get a
@@ -2313,7 +2313,7 @@ nvmet_fc_transfer_fcp_data(struct nvmet_fc_tgtport *tgtport,
 	ret = tgtport->ops->fcp_op(&tgtport->fc_target_port, fod->fcpreq);
 	if (ret) {
 		/*
-		 * should be ok to set w/o lock as its in the thread of
+		 * should be ok to set w/o lock as it's in the thread of
 		 * execution (not an async timer routine) and doesn't
 		 * contend with any clearing action
 		 */
@@ -2629,7 +2629,7 @@ nvmet_fc_handle_fcp_rqst(struct nvmet_fc_tgtport *tgtport,
 	 * and the api of the FC LLDD which may issue a hw command to send the
 	 * response, but the LLDD may not get the hw completion for that command
 	 * and upcall the nvmet_fc layer before a new command may be
-	 * asynchronously received - its possible for a command to be received
+	 * asynchronously received - it's possible for a command to be received
 	 * before the LLDD and nvmet_fc have recycled the job structure. It gives
 	 * the appearance of more commands received than fits in the sq.
 	 * To alleviate this scenario, a temporary queue is maintained in the

drivers/nvme/target/rdma.c

Lines changed: 3 additions & 3 deletions
@@ -1731,7 +1731,7 @@ static void nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
  * We registered an ib_client to handle device removal for queues,
  * so we only need to handle the listening port cm_ids. In this case
  * we nullify the priv to prevent double cm_id destruction and destroying
- * the cm_id implicitely by returning a non-zero rc to the callout.
+ * the cm_id implicitly by returning a non-zero rc to the callout.
  */
 static int nvmet_rdma_device_removal(struct rdma_cm_id *cm_id,
 		struct nvmet_rdma_queue *queue)
@@ -1742,7 +1742,7 @@ static int nvmet_rdma_device_removal(struct rdma_cm_id *cm_id,
 		/*
 		 * This is a queue cm_id. we have registered
 		 * an ib_client to handle queues removal
-		 * so don't interfear and just return.
+		 * so don't interfere and just return.
 		 */
 		return 0;
 	}
@@ -1760,7 +1760,7 @@ static int nvmet_rdma_device_removal(struct rdma_cm_id *cm_id,
 
 	/*
 	 * We need to return 1 so that the core will destroy
-	 * it's own ID. What a great API design..
+	 * its own ID. What a great API design..
 	 */
 	return 1;
}
