
Commit 7b171f4

Merge bitcoin-core#234: doc: Fix typos and grammar in documentation and comments
458745e Fix various typos, spelling mistakes, and grammatical errors in design.md and source code comments. (Ehnamuram Enoch)

Pull request description:

  This PR fixes various typos, spelling mistakes, and grammatical errors found in `doc/design.md` and source code comments to improve readability.

ACKs for top commit:
  maflcko:
    lgtm ACK 458745e
  ryanofsky:
    Code review ACK 458745e. Thanks for the fixes!

Tree-SHA512: 61f4de5369aef2f2c963c95a3d898a70c7c7a2bbe1a17ae2ac47d89e5436ad16b68cf2e7849da1a6701fd247509eba2dfef0e4db32d5583ebe8d735ae347049a
2 parents 585decc + 458745e commit 7b171f4

File tree

5 files changed (+16 −16 lines)


doc/design.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -199,7 +199,7 @@ If this is the first asynchronous request made from the current client thread, `
 3. Create a local `Thread::Server` object for the current thread (stored in `callback_threads` map)
 4. Set the local thread capability in `Context.callbackThread`
 
-Subsequent requests will resuse the existing thread capabilites held in `callback_threads` and `request_threads`.
+Subsequent requests will reuse the existing thread capabilities held in `callback_threads` and `request_threads`.
 
 **Server side** (`PassField`):
 1. Looks up the local `Thread::Server` object specified by `context.thread`
```
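The design.md passage above describes a lookup-or-create pattern: a per-thread capability is created on the first request and reused from a map on subsequent ones. A minimal sketch of that reuse pattern, using a hypothetical `ThreadCapability` type and `GetOrCreate` helper (illustrative names, not the library's API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in for the per-thread capability objects stored in
// maps like `callback_threads` / `request_threads`.
struct ThreadCapability
{
    int id;
};

// Returns the existing capability for `thread_name`, constructing one only
// on the first request; later calls reuse the cached map entry.
ThreadCapability& GetOrCreate(std::map<std::string, ThreadCapability>& threads,
                              const std::string& thread_name)
{
    // try_emplace only constructs a new entry if `thread_name` is absent.
    return threads
        .try_emplace(thread_name, ThreadCapability{static_cast<int>(threads.size())})
        .first->second;
}
```

Because the map entry is created at most once per thread name, repeated requests from the same client thread get back the same object rather than allocating a fresh capability each time.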

include/mp/proxy-io.h

Lines changed: 5 additions & 5 deletions
```diff
@@ -317,12 +317,12 @@ class EventLoop
     void* m_context;
 };
 
-//! Single element task queue used to handle recursive capnp calls. (If server
-//! makes an callback into the client in the middle of a request, while client
+//! Single element task queue used to handle recursive capnp calls. (If the
+//! server makes a callback into the client in the middle of a request, while the client
 //! thread is blocked waiting for server response, this is what allows the
-//! client to run the request in the same thread, the same way code would run in
-//! single process, with the callback sharing same thread stack as the original
-//! call.
+//! client to run the request in the same thread, the same way code would run in a
+//! single process, with the callback sharing the same thread stack as the original
+//! call.)
 struct Waiter
 {
     Waiter() = default;
```
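The corrected `Waiter` comment describes a single-element task queue: while a client thread is blocked waiting on the server, a recursive callback can be posted to it and executed on that same thread's stack. A minimal sketch of the idea (a hypothetical `TaskSlot`, not the library's `Waiter` implementation, and with the blocking/wakeup machinery omitted):

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <optional>
#include <utility>

// Single-element task slot: holds at most one pending callback, which the
// blocked thread runs directly on its own stack.
struct TaskSlot
{
    std::mutex m_mutex;
    std::optional<std::function<void()>> m_fn;

    // Post a callback for the blocked thread to execute.
    void post(std::function<void()> fn)
    {
        std::lock_guard<std::mutex> lock{m_mutex};
        assert(!m_fn); // single element: at most one task may be pending
        m_fn = std::move(fn);
    }

    // Called by the waiting thread; runs the pending task, if any, on the
    // current stack. Returns whether a task was executed.
    bool run_pending()
    {
        std::optional<std::function<void()>> fn;
        {
            std::lock_guard<std::mutex> lock{m_mutex};
            fn.swap(m_fn);
        }
        if (!fn) return false;
        (*fn)(); // executes on the caller's stack, like a recursive call
        return true;
    }
};
```

Running the callback on the waiting thread's own stack is what makes cross-process behavior match single-process behavior, where a server-to-client callback is just a nested function call.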

include/mp/proxy-types.h

Lines changed: 6 additions & 6 deletions
```diff
@@ -87,7 +87,7 @@ struct StructField
 // actually construct the read destination object. For example, if a std::string
 // is being read, the ReadField call will call the custom emplace_fn with char*
 // and size_t arguments, and the emplace function can decide whether to call the
-// constructor via the operator or make_shared or emplace or just return a
+// constructor via the operator, make_shared, emplace or just return a
 // temporary string that is moved from.
 template <typename LocalType, typename EmplaceFn>
 struct ReadDestEmplace
@@ -205,11 +205,11 @@ void BuildField(TypeList<LocalTypes...>, Context& context, Output&& output, Valu
 }
 }
 
-// Adapter to let BuildField overloads methods work set & init list elements as
-// if they were fields of a struct. If BuildField is changed to use some kind of
-// accessor class instead of calling method pointers, then then maybe this could
-// go away or be simplified, because would no longer be a need to return
-// ListOutput method pointers emulating capnp struct method pointers..
+// Adapter that allows BuildField overloads to work with, set, and initialize list
+// elements as if they were fields of a struct. If BuildField is changed to use some
+// kind of accessor class instead of calling method pointers, then maybe this could
+// go away or be simplified, because there would no longer be a need to return
+// ListOutput method pointers emulating capnp struct method pointers.
 template <typename ListType>
 struct ListOutput;
```
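The `ReadDestEmplace` comment fixed above describes a pattern where the reader hands raw arguments (for a string, `char*` and `size_t`) to a caller-supplied emplace function, which decides how to construct the destination. A minimal sketch of that calling convention, with a hypothetical `ReadString` helper standing in for the library's `ReadField` machinery:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// The reader forwards raw (char*, size_t) arguments; the caller-supplied
// emplace function picks the construction strategy (direct constructor here,
// but make_shared, container emplace, or returning a temporary also work).
template <typename EmplaceFn>
auto ReadString(const char* data, std::size_t size, EmplaceFn&& emplace_fn)
{
    return std::forward<EmplaceFn>(emplace_fn)(data, size);
}
```

Deferring construction to the emplace function lets the same read path build the value in place, wrap it in a smart pointer, or hand back a movable temporary, without the reader needing to know which.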

src/mp/gen.cpp

Lines changed: 1 addition & 1 deletion
```diff
@@ -136,7 +136,7 @@ static bool BoxedType(const ::capnp::Type& type)
 // include_prefix can be used to control relative include paths used in
 // generated files. For example if src_file is "/a/b/c/d/file.canp" and
 // include_prefix is "/a/b/c" include lines like
-// "#include <d/file.capnp.proxy.h>" "#include <d/file.capnp.proxy-types.h>"i
+// "#include <d/file.capnp.proxy.h>", "#include <d/file.capnp.proxy-types.h>"
 // will be generated.
 static void Generate(kj::StringPtr src_prefix,
     kj::StringPtr include_prefix,
```
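The corrected gen.cpp comment describes stripping `include_prefix` from the source path to produce relative `#include` lines. A sketch of that computation with a hypothetical `RelativeInclude` helper (not the generator's actual code, which uses kj strings):

```cpp
#include <cassert>
#include <string>

// Strip include_prefix from src_file so the generated include line is
// relative to the prefix, e.g. "/a/b/c/d/file.capnp" with prefix "/a/b/c"
// yields "#include <d/file.capnp.proxy.h>".
std::string RelativeInclude(const std::string& src_file, const std::string& include_prefix)
{
    std::string path = src_file;
    if (path.compare(0, include_prefix.size(), include_prefix) == 0) {
        path = path.substr(include_prefix.size());
        if (!path.empty() && path[0] == '/') path = path.substr(1); // drop leading separator
    }
    return "#include <" + path + ".proxy.h>";
}
```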

src/mp/proxy.cpp

Lines changed: 3 additions & 3 deletions
```diff
@@ -102,12 +102,12 @@ Connection::~Connection()
     // The ProxyClient cleanup handlers are synchronous because they are fast
     // and don't do anything besides release capnp resources and reset state so
     // future calls to client methods immediately throw exceptions instead of
-    // trying to communicating across the socket. The synchronous callbacks set
+    // trying to communicate across the socket. The synchronous callbacks set
     // ProxyClient capability pointers to null, so new method calls on client
     // objects fail without triggering i/o or relying on event loop which may go
     // out of scope or trigger obscure capnp i/o errors.
     //
-    // The ProxySever cleanup handlers call user defined destructors on server
+    // The ProxyServer cleanup handlers call user defined destructors on the server
     // object, which can run arbitrary blocking bitcoin code so they have to run
     // asynchronously in a different thread. The asynchronous cleanup functions
     // intentionally aren't started until after the synchronous cleanup
@@ -136,7 +136,7 @@ Connection::~Connection()
     //
     // Either way disconnect code runs in the event loop thread and called both
     // on clean and unclean shutdowns. In unclean shutdown case when the
-    // connection is broken, sync and async cleanup lists will filled with
+    // connection is broken, sync and async cleanup lists will be filled with
     // callbacks. In the clean shutdown case both lists will be empty.
     Lock lock{m_loop->m_mutex};
     while (!m_sync_cleanup_fns.empty()) {
```
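The destructor comments above describe draining a list of synchronous cleanup callbacks under a lock, as the trailing `while (!m_sync_cleanup_fns.empty())` loop begins to do. A minimal sketch of that drain pattern, using hypothetical names rather than `Connection`'s actual members:

```cpp
#include <cassert>
#include <functional>
#include <list>
#include <mutex>
#include <utility>

// Drains registered cleanup callbacks one at a time, removing each from the
// list before running it. Illustrative sketch, not the library's code.
struct CleanupList
{
    std::mutex m_mutex;
    std::list<std::function<void()>> m_fns;

    void DrainSync()
    {
        std::unique_lock<std::mutex> lock{m_mutex};
        while (!m_fns.empty()) {
            auto fn = std::move(m_fns.front());
            m_fns.pop_front();
            lock.unlock(); // don't hold the lock while running a callback
            fn();
            lock.lock();
        }
    }
};
```

Popping each callback before invoking it means a callback that registers new cleanups, or a concurrent registration, is still picked up by the loop, and nothing runs twice.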
