= Netty Incubator Buffer API

This repository is incubating a new buffer API proposed for Netty 5.

== Building and Testing

Short version: just run `make`.

The project currently relies on snapshot versions of the https://github.com/openjdk/panama-foreign[Panama Foreign] fork of OpenJDK.
This allows us to test the most recent version of the `jdk.incubator.foreign` APIs, but it also makes building and local development more involved.
To simplify things, we have a Docker-based build, controlled via a Makefile with the following commands:

* `image` – build the Docker image. This includes building a snapshot of OpenJDK, and downloading all relevant Maven dependencies.
* `test` – run all tests in a Docker container. This implies `image`. The container is automatically deleted afterwards.
* `dbg` – drop into a shell in the build container, without running the build itself. The debugging container is not deleted afterwards.
* `clean` – remove the leftover containers created by `dbg`, `test`, and `build`.
* `build` – build binaries and run all tests in a container, and copy the `target` directory out of the container afterwards. This is the default build target.

== Example: Echo Client and Server

Making use of this new buffer API on the client side is quite easy.
Even though Netty 5 does not have native support for these buffers, it is able to convert them to the old `ByteBuf` API as needed.
This means we are able to send incubator buffers through a Netty pipeline, and have it work as if we were sending `ByteBuf` instances.

[source,java]
----
public final class Client {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new MultithreadEventLoopGroup(NioHandler.newFactory());
        try (BufferAllocator allocator = BufferAllocator.pooledDirect()) { // <1>
            Bootstrap b = new Bootstrap();
            b.group(group)
             .channel(NioSocketChannel.class)
             .option(ChannelOption.TCP_NODELAY, true)
             .handler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new ChannelHandlerAdapter() {
                         @Override
                         public void channelActive(ChannelHandlerContext ctx) {
                             Buffer message = allocator.allocate(256); // <2>
                             for (int i = 0; i < message.capacity(); i++) {
                                 message.writeByte((byte) i);
                             }
                             ctx.writeAndFlush(message); // <3>
                         }
                     });
                 }
             });

            // Start the client.
            ChannelFuture f = b.connect("127.0.0.1", 8007).sync();

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            // Shut down the event loop to terminate all threads.
            group.shutdownGracefully();
        }
    }
}
----
<1> A life-cycled allocator is created to wrap the scope of our application.
<2> Buffers are allocated with one of the `allocate` methods.
<3> The buffer can then be sent down the pipeline, and will be written to the socket just like a `ByteBuf` would.

[NOTE]
--
The same is not the case for `BufferHolder`; it is not treated the same way as a `ByteBufHolder`.
--

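Outside of a pipeline, the allocator and buffer life-cycle can be sketched as follows.
This is a hypothetical standalone example, not part of the repository: it assumes the incubator `Buffer` exposes `capacity`, `writeByte`, `readableBytes`, and `readByte` accessors, and that both the allocator and its buffers are closeable resources.

[source,java]
----
import io.netty.buffer.api.Buffer;
import io.netty.buffer.api.BufferAllocator;

public final class LifecycleSketch {
    public static void main(String[] args) {
        // Closing the allocator releases its pooled memory,
        // so it should out-live all buffers allocated from it.
        try (BufferAllocator allocator = BufferAllocator.pooledDirect()) {
            // Buffers are life-cycled too; close them when done,
            // unless ownership is handed off, e.g. to a pipeline.
            try (Buffer buf = allocator.allocate(8)) {
                for (int i = 0; i < buf.capacity(); i++) {
                    buf.writeByte((byte) i);
                }
                while (buf.readableBytes() > 0) {
                    System.out.print(buf.readByte() + " ");
                }
            }
        }
    }
}
----

Note that a buffer passed to `ctx.writeAndFlush`, as in the client above, is not closed by the sender; ownership moves to the pipeline, which releases the buffer once its bytes have been written out.
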
On the server side, things are more complicated, because Netty itself will be allocating the buffers, and the `ByteBufAllocator` API is only capable of returning `ByteBuf` instances.
The `ByteBufAllocatorAdaptor` will allocate `ByteBuf` instances that are backed by the new buffers.
The buffers can then be extracted from the `ByteBuf` instances with the `ByteBufAdaptor.extract` method.

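As a minimal sketch of that round-trip, outside of any bootstrap (a hypothetical example; it assumes the adaptor supports the standard `ByteBufAllocator.directBuffer` method, and that the extracted `Buffer` shares memory and life-cycle with the wrapping `ByteBuf`):

[source,java]
----
import io.netty.buffer.ByteBuf;
import io.netty.buffer.api.Buffer;
import io.netty.buffer.api.adaptor.ByteBufAdaptor;
import io.netty.buffer.api.adaptor.ByteBufAllocatorAdaptor;

public final class AdaptorSketch {
    public static void main(String[] args) {
        ByteBufAllocatorAdaptor allocator = new ByteBufAllocatorAdaptor();
        // Netty components see a plain ByteBuf...
        ByteBuf byteBuf = allocator.directBuffer(256);
        // ...while we can pull out the backing new-API Buffer.
        Buffer buffer = ByteBufAdaptor.extract(byteBuf);
        buffer.writeByte((byte) 42);
        // Both views are backed by the same memory, so releasing
        // the ByteBuf also releases the extracted Buffer.
        byteBuf.release();
    }
}
----
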
We can tell a Netty server how to allocate buffers by setting the `ALLOCATOR` child-channel option:

[source,java]
----
ByteBufAllocatorAdaptor allocator = new ByteBufAllocatorAdaptor(); // <1>
ServerBootstrap server = new ServerBootstrap();
server.group(bossGroup, workerGroup)
      .channel(NioServerSocketChannel.class)
      .childOption(ChannelOption.ALLOCATOR, allocator) // <2>
      .childHandler(new EchoServerHandler());
----
<1> The `ByteBufAllocatorAdaptor` implements `ByteBufAllocator`, and directly allocates `ByteBuf` instances that are backed by buffers that use the new API.
<2> To make Netty use a given allocator when allocating buffers for receiving data, we set the allocator as a child option.

With the above, we have only changed how the buffers are allocated; we haven't changed the API we use for interacting with the buffers.
The buffers are still allocated as `ByteBuf` instances, and flow through the pipeline as such.
If we want to use the new buffer API in our server handlers, we have to extract the buffers from the `ByteBuf` instances that are passed down:

[source,java]
----
import io.netty.buffer.ByteBuf;
import io.netty.buffer.api.Buffer;
import io.netty.buffer.api.adaptor.ByteBufAdaptor;

@Sharable
public class EchoServerHandler implements ChannelHandler {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) { // <1>
        if (msg instanceof ByteBuf) { // <2>
            // For this example, we only echo back buffers that are using the new buffer API.
            Buffer buf = ByteBufAdaptor.extract((ByteBuf) msg); // <3>
            ctx.write(buf); // <4>
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }
}
----
<1> Netty pipelines are defined as transferring `Object` instances as messages.
<2> When we receive data directly from a socket, these messages will be `ByteBuf` instances with the received data.
<3> Since we set the allocator to create `ByteBuf` instances that are backed by buffers with the new API, we will be able to extract the backing `Buffer` instances.
<4> We can then operate on the extracted `Buffer` instances directly.
The `Buffer` and `ByteBuf` instances mirror each other exactly.
In this case, we just write them back to the client that sent the data to us.

See the files in `src/test/java/io/netty/buffer/api/examples/echo` for the full source code of this example.