NpgsqlRest NuGet Library
NpgsqlRest is a .NET8 NuGet library that can build a RESTful API from the existing PostgreSQL routines (functions or procedures) with a single extension on a web application builder.
You can think of it as PostgREST, but only for routines (functions or procedures), since PostgREST does a lot more. It is implemented as .NET8 middleware, so you can create custom builds or integrate it with your .NET8 projects, and it is highly customizable, with some unique features.
The library is AOT-ready, which means it can utilize the .NET AOT feature to create self-contained, native binary executables, but more on that later.
Motivation
This is built to be a crucial part of a future low-code data platform based on PostgreSQL. As such, it was built to satisfy very specific requirements (as opposed to catering to a wider audience). For example, it has two dependencies: 1) the latest Microsoft.NET.Sdk.Web that targets the net8.0 framework, and 2) the latest Npgsql ADO.NET Data Provider for PostgreSQL (current version 8.0.1). Those dependencies will be updated with each new release. If you need to work with older versions, you need to either 1) update them in your project or 2) clone the NpgsqlRest repository and make a custom build with downgraded dependencies.
Furthermore, future developments such as new features highly depend on the main project that will be using this library, but with the right sponsorship, anything is possible.
Features and Usage
Most basic usage in a standard .NET8 web application looks like this:
```csharp
using NpgsqlRest;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseNpgsqlRest(new()
{
    ConnectionString = "Host=localhost;Port=5432;Database=my_db;Username=postgres;Password=postgres"
    //
    // the rest of the options goes here
    //
});

app.Run();
```
So, all you need to do is call the UseNpgsqlRest extension on your web application builder and provide a valid Npgsql connection string for the PostgreSQL database. Of course, there are many available options that you can use to configure all endpoints or specific endpoints. The best way to explore the available options is to check out the source code of the NpgsqlRestOptions class.
And now, if you happen to have some PostgreSQL function, like, for example, this hello world function:
```sql
create function hello_world()
returns text
language sql
as $$
select 'Hello World'
$$;
```
You will have a new RESTful endpoint exposed in your .NET application.
This endpoint will be a POST. The currently implemented rules are:
The endpoint is GET if the routine name either:
Starts with get_ (case insensitive).
Ends with _get (case insensitive).
Contains _get_ (case insensitive).
Otherwise, the endpoint is POST if the volatility option is VOLATILE (the PostgreSQL default). These types of functions are assumed to be changing some data. Of course, all of this is highly configurable, either through code options or through comment annotations, which we will see later.
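To illustrate the naming rules above, a hypothetical function with a get_ prefix (the function name and the resulting URL shape below are my illustrations, assuming the URL follows the same kebab-case pattern as the hello_world example) would produce a GET endpoint:

```sql
-- hypothetical example: the get_ prefix makes this a GET endpoint
create function get_customers()
returns text
language sql
as $$
select 'customers'
$$;
```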
And now, in order to enable endpoint discovery and to facilitate testing and debugging, NpgsqlRest can build an HTTP file for testing. HTTP files are not enabled by default, so they have to be enabled in the builder options:
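The configuration snippet did not survive the page extraction; as a sketch, enabling HTTP file generation might look something like the following. The HttpFileOptions property and the HttpFileOption.File enum value are assumptions, so check the NpgsqlRestOptions source for the exact names:

```csharp
using NpgsqlRest;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseNpgsqlRest(new()
{
    ConnectionString = "Host=localhost;Port=5432;Database=my_db;Username=postgres;Password=postgres",
    // assumed option name: write a .http file containing a request for every created endpoint
    HttpFileOptions = new(HttpFileOption.File)
});

app.Run();
```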
This will produce the following HTTP file, which we can use for further testing and debugging with Visual Studio or VS Code (with the REST Client extension):
```http
@host=http://localhost:5000

// function public.hello_world()
// returns text
POST {{host}}/api/hello-world
```
Anyway, you can notice that the HTTP file also contains a nicely formatted comment header with a basic function signature, to make clear exactly which function will be executed. You can even include the entire function definition in this file if you want. Also, if you happen to have some parameters, the HTTP file will contain populated test parameters so that the endpoint is ready to be executed immediately.
And, when this endpoint is executed, the results are as expected:
```
HTTP/1.1 200 OK
Connection: close
Content-Type: text/plain
Date: Tue, 09 Jan 2024 14:25:26 GMT
Server: Kestrel
Transfer-Encoding: chunked

Hello World
```
That would be a basic usage (with HTTP file generation enabled). Other features may include (incomplete list):
Array parameters in query strings or JSON body and array type serialization in responses.
Entire body content to single parameter assignment.
Comment annotation endpoint configuration.
Default parameters support.
Schema control (different paths, HTTP files).
Parameter overloading.
Sending request headers as context, or as a parameter (or neither).
Returning sets of primitive types.
Returning tables.
Returning sets of existing tables (table-valued functions).
Returning sets of record types (untyped functions).
Unnamed parameters support.
Variadic parameters support.
Custom naming conventions for URLs, request parameters or response JSON fields.
Custom validation events and custom parameter values.
Untyped functions have turned out to be a very useful feature because the underlying function doesn't require a resulting table definition at all:
```sql
create function get_test_records()
returns setof record -- see, no table definition
language sql
as
$$
select * from (values
    (1, 'one'),
    (2, 'two'),
    (3, 'three')
) v;
$$;
```
But, on the other hand, calling clients have to know which types to expect, because the resulting response will be a JSON array where every element is an array of raw PostgreSQL values encoded as JSON strings.
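To illustrate, for the get_test_records() function above the response body would be shaped roughly like this (array of arrays, values serialized as strings; the exact value encoding may differ):

```json
[
  ["1", "one"],
  ["2", "two"],
  ["3", "three"]
]
```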
Another interesting feature is custom validation events, used when, for example, you want to supply some custom value from your code to certain parameters. The common example is sending the Context.User.Identity.Name value as a function parameter:
```csharp
app.UseNpgsqlRest(new(connectionString)
{
    ValidateParameters = p =>
    {
        //
        // If user is authenticated and parameter name is "user", always send Context.User.Identity.Name
        //
        if (p.Context.User.Identity?.IsAuthenticated == true &&
            string.Equals(p.ParamName, "user", StringComparison.OrdinalIgnoreCase))
        {
            p.Parameter.Value = p.Context.User.Identity.Name;
        }
    }
});
```
There are also events fired when an endpoint is created and when all endpoints are created. These are very useful in scenarios where we want to generate some interfaces, like TypeScript interfaces for the frontend, or even entire fetch scripts; all required data is passed to these events.
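As a rough sketch of that idea, a per-endpoint callback could log (or code-generate from) each created endpoint. The option name EndpointCreated and the callback signature below are assumptions based on the description, not the library's confirmed API:

```csharp
app.UseNpgsqlRest(new(connectionString)
{
    // assumed option name and signature: invoked once for every endpoint created,
    // receiving the routine metadata and the endpoint definition
    EndpointCreated = (routine, endpoint) =>
    {
        Console.WriteLine($"Created {endpoint.Method} {endpoint.Url} for {routine.Name}");
        // ...emit a TypeScript interface or a fetch wrapper here...
    }
});
```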
For more usage examples of these features, the best place to check is the test project.
Now, there are two unique features that I'd like to discuss in a bit more depth:
Comment annotations.
AOT support.
Comment Annotations
The basic idea is that we can add text annotations to routine comments that can alter endpoint behavior and properties.
PostgreSQL supports adding custom comments to all objects since version, well, since forever. See the comment documentation. You can add any text to your functions or procedures:
```sql
COMMENT ON FUNCTION my_function (timestamp) IS 'Returns Roman Numeral';
COMMENT ON PROCEDURE my_proc (integer, integer) IS 'Runs a report';
```
In the example above, the function hello_world() generates the HTTP endpoint POST http://localhost:5000/api/hello-world. We can change this to a GET endpoint with the following comment:
```sql
COMMENT ON FUNCTION hello_world () IS 'HTTP GET';
```
We can change the generated path to something else too:
```sql
COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world';
```
Our endpoint is now GET http://localhost:5000/say-hello-world. This approach doesn't stop us from adding custom comments. If the comment line is not a recognizable annotation, it is simply ignored, since it is, well, just a comment. For example, the following comment has the same effect:
```sql
COMMENT ON FUNCTION hello_world () IS 'Simple hello world example.
HTTP GET /say-hello-world
Another comment line...';
```
A valid comment annotation has to be on a line that starts with HTTP, and the next two optional words are the method verb and the URL. All additional annotations go below this HTTP line. For example, if we add a line that contains a colon (:) character below the HTTP line, it is interpreted as a new request header, where the left side of the colon is the key and the right side is the header value:
```sql
COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world
Content-Type: text/html';
```
This forces content returned from the GET /say-hello-world to have a header Content-Type: text/html.
Note that these comments don't have to be controlled through code like this. For some people, it is much easier to do this through an administration GUI such as pgAdmin or DBeaver: just click on function properties and fill in the comment box in your GUI tool. But you do have to do it as a database user with elevated permissions (superuser or admin).
You can control lots of other cool stuff like this:
Request parameters type (query string or JSON body). For example, a param-type query line forces parameters into the query string.
Authorization control: just adding requires-authorization or authorize will make the endpoint return an unauthorized response if the user is not, well, authorized.
Command timeout: add timeout <number> to set the command timeout to that number of seconds.
Request headers mode: use request-headers <option> to set the mode to ignore, context (request headers are set on the context), or parameter (request headers are sent as a parameter).
Request headers parameter name: use request-headers-param-name <name> to set the name of the parameter to which request headers are sent when the request-headers parameter option is set. The default is `header`, but you can change it here.
Body parameter name: you can bind the entire request body content to a single parameter with the body-param-name <name> annotation.
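Putting a few of these together, a single comment can stack multiple annotations below the HTTP line. The combination below is my illustration, using only annotations described above, so treat it as a sketch:

```sql
COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world
param-type query
authorize
timeout 30
request-headers context';
```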
More info on this can be found in the default endpoint parser code, DefaultEndpoint.cs, and usage examples are also in the test project.
By default, all comments will be parsed for annotations. This behavior can be changed in a builder options configuration:
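For example, to require an explicit HTTP tag before an endpoint is created, the options sketch might look like this. The property and enum names here, CommentsMode and OnlyWithHttpTag, are assumptions based on the library's description, so verify against NpgsqlRestOptions:

```csharp
app.UseNpgsqlRest(new(connectionString)
{
    // assumed names: parse comments for annotations only when the comment
    // contains an HTTP tag; routines without it get no endpoint at all
    CommentsMode = CommentsMode.OnlyWithHttpTag
});
```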
If we set this mode to the OnlyWithHttpTag value, we can control whether an endpoint is created at all. That means that the comment must have at least an HTTP tag:
```sql
COMMENT ON FUNCTION hello_world () IS 'HTTP';
```
Only then is the endpoint enabled at all. This way, we (or the DB admin) can control the creation of endpoints and choose exactly which functions and procedures are exposed as HTTP endpoints.
AOT Support
This was a design goal from the beginning. The idea is to create a native executable for the target platform (usually Linux) and simply copy it to a server and run it like any other program. There is no need for any runtime; the executable is self-contained and it has a very fast startup time.
This, of course, is optional. AOT builds are not for every environment, and some requirements need to be satisfied (like no reflection code). But if you do choose to have an AOT build, this is the minimal configuration.
First, you'll need a <PublishAot>true</PublishAot> tag in the property group of your project file. It can look like this:
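The project file did not survive the page extraction; a minimal sketch, assuming only the NpgsqlRest package reference (the version number is illustrative), could be:

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <!-- enables native AOT publishing -->
    <PublishAot>true</PublishAot>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="NpgsqlRest" Version="1.4.0" />
  </ItemGroup>

</Project>
```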
This is just the minimal build, naturally, you can add things like logging and configuration and customize endpoints to your needs.
The next step is to launch a build process with dotnet publish -r linux-x64 -c Release. This step has to be performed in a Linux environment since we are targeting a Linux build and, at this point, cross-OS native compilation is not yet supported. This will produce a runnable, self-contained executable that is exactly 20,041,896 bytes, or approximately 20MB, in size.
There, you don't need Docker.
Performance
I was excited to see some performance benchmarks since I already know how well .NET8 is optimized for performance. The only thing I could compare those benchmarks to is the performance of the same operations on a PostgREST instance.
First, some numbers:
NpgsqlRest AOT Linux executable is 20,050,120 bytes or approximately 20MB.
PostgREST version 12.0.2 Linux executable is 17,398,464 bytes or approximately 17.3MB.
As far as memory usage goes, it does seem that NpgsqlRest is a bit better but, again, it is far less feature-rich than PostgREST. Here is the output from ps aux:
For a testing scenario, I've created a single PostgreSQL function that has 5 parameters of a different type and returns a 10-field table with the number of records from the first parameter:
```sql
create function perf_test(
    _records int,
    _text_param text,
    _int_param int,
    _ts_param timestamp,
    _bool_param bool
)
returns table(
    id1 int,
    foo1 text,
    bar1 text,
    datetime1 timestamp,
    id2 int,
    foo2 text,
    bar2 text,
    datetime2 timestamp,
    long_foo_bar text,
    is_foobar bool
)
language sql
as
$$
select
    i + _int_param as id1,
    'foo' || '_' || _text_param || '_' || i::text as foo1,
    'bar' || i::text as bar1,
    (_ts_param::date) + (i::text || ' days')::interval as datetime1,
    i + 1 + _int_param as id2,
    'foo' || '_' || _text_param || '_' || (i+1)::text as foo2,
    'bar' || '_' || _text_param || '_' || (i+1)::text as bar2,
    (_ts_param::date) + ((i+1)::text || ' days')::interval as datetime2,
    'long_foo_bar_' || '_' || _text_param || '_' || (i+2)::text as long_foo_bar,
    (i % 2)::boolean and _bool_param as is_foobar
from
    generate_series(1, _records) as i
$$;
```
The testing suite is the K6 load testing system from Grafana Labs, and there are four different testing scenarios prepared:
The 60-second performance tests with 100 simulated virtual users retrieving 1 record from the endpoint.
The 60-second performance tests with 100 simulated virtual users retrieving 5 records from the endpoint.
The 60-second performance tests with 100 simulated virtual users retrieving 50 records from the endpoint.
The 60-second performance tests with 100 simulated virtual users retrieving 100 records from the endpoint.
In short, running for 60 seconds with 100 simulated virtual users, PostgREST processed 60,875 successful requests and NpgsqlRest processed 280,845 successful requests! NpgsqlRest is 4.61 times faster and can handle that many more requests!
I knew that .NET8 is very well optimized, but I didn't expect this much. Being able to process 280K requests instead of 60K is very significant and a money saver.
Next, I wanted to see how results would change if I started returning more than 1 record. When a PostgreSQL call returns a record, it needs to be converted from a raw format returned from the server into JSON which requires a fair amount of string manipulation. So in that case performance of the built-in web server would be less important. I ran performance tests again for 5, 50 and 100 returned records. Here is the table with condensed results:
| Records | NpgsqlRest requests | PostgREST requests | Ratio |
|---------|---------------------|--------------------|-------|
| 1       | 280,845             | 60,875             | 4.61  |
| 5       | 310,123             | 56,188             | 5.51  |
| 50      | 164,208             | 71,430             | 2.29  |
| 100     | 143,110             | 57,217             | 2.50  |
Note that you can see all test files and complete detailed results output in a performance test source code. All tests were conducted on my poor little laptop. If someone has the time and will, they can be repeated in some more isolated environment.
Now, we can see that the ratio is getting lower with a higher number of returned records. I'm not sure what exactly to make of it.
Yes, well, .NET8 is faster, that much is clear for now. Somewhere between 2.5 and 5.5 times, approximately. "It depends," as developers like to say.
Roadmap
As I said before, NpgsqlRest will be a crucial part of the future data platform, and as such the active development will continue. However, I cannot guarantee that all new features (or even fixes) will be published and open-sourced. That might change if enough interest is generated and perhaps with the right sponsorship because I don't have much of it as it stands.
However, that is not the reason not to discuss and consider possible features:
Live Reload Feature
The idea behind this feature would be the ability to recreate endpoints without having to restart the application. You could simply turn it on or off (or change the URL or something else) - by using comment annotations and do the live refresh without restarting.
First of all, this would require some serious refactoring. And I'm not entirely convinced that it is needed, since a lot of the accompanying code is going to be generated on endpoint creation events.
Tables And Views Support
PostgREST supports endpoints from tables and views, so why shouldn't NpgsqlRest as well?
I don't know about this. I could expose a view, for example, as an endpoint, but it could be dangerous because a view can return an insane amount of data that would crash your application. The same goes for tables.
It would be interesting to have the ability to enable CRUD endpoints for certain tables, possibly with conflict resolution and all. But that approach feels to me very much like a giant can of worms that I don't wish to open. A sort of HTTP ORM!? Yikes.
Besides, my design philosophy is to encapsulate everything into SQL functions and procedures. It isn't that hard to write a function that returns your table or your view, is it?
Other Service Types
Like gRPC services for example? Or SignalR socket streaming?
Certainly, but I cannot guarantee that it will be open-sourced, maybe it will, I don't know. Or even that I would make something like that. However, from this standpoint, it shouldn't be too hard (if not even trivial). Luckily this library is open source and you can try it yourself.
That is it.
The library is already somewhat mature at version 1.4.0 and is actively being used on one of my projects. If you find it useful and interesting, visit the GitHub repository and give it a star.
I would also like to hear in the comments what other people think, especially about those performance tests above, because I don't know what to say; that goes beyond my expertise. I'm just a guy who likes to write SQL. So, leave a comment below in the comment section and you can ask me anything.