
Conversation

ernestoresende
Collaborator

Summary

This is an exploratory attempt at an API design for @as-integrations/google-cloud-functions that addresses the problems with the current implementation and some of the limitations imposed by the Google Cloud Functions Framework.

The problem

Considering @as-integrations/google-cloud-functions as the library code for server applications, any application trying to use the integration today has to follow a very specific bundling process that involves:

  • Explicitly declaring a functionTarget either using a hard-coded string or through an environment variable;
  • Telling the bundler to inject dependency code into the bundle source-code directly, without relying on importing an external dependency;

This all stems from the fact that Google Cloud Functions is incapable of following module imports in order to find the function signature it should run (I would really like to be wrong about this, but as far as I know, and based on a series of tests trying to see if this pattern works, this is what I've observed).

How does it work now?

The way this works now is by registering the function handler under a string target via the http method provided by @google-cloud/functions-framework:

import { http } from '@google-cloud/functions-framework';

export function startServerAndCreateGoogleCloudFunctionsHandler<Context extends BaseContext>(
  server: ApolloServer<Context>,
  options: Options<Context>
) {
    server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();
    const contextFunction = options?.context || defaultContext;

    const handler = async (req, res) => {
      /** Request handler code */	
    }

  return http(options.functionTarget, handler);
}

That string is taken from the options object supplied when starting the Apollo Server instance on the server application:

startServerAndCreateGoogleCloudFunctionsHandler(server, {
  functionTarget: 'myCustomFunctionName',
  // or
  functionTarget: process.env.FUNCTION_TARGET as string,
});

When bundled correctly, we end up with something like this:

var server = new import_server.ApolloServer({
  typeDefs,
  resolvers,
  introspection: true,
});

(0, import_google_cloud_functions.startServerAndCreateGoogleCloudFunctionsHandler)(server, {
  functionTarget: "apollo-graphql"
});

The current implementation is heavily dependent on custom bundler rules set up outside the library's scope to make sure the function signature is "visible" to Google Cloud Functions (the recommended setup from the /examples directory).

Furthermore, when supplied with an environment variable, we need more custom rules to ensure that the bundler replaces the process.env call with the actual value on the bundled code.
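With esbuild, for instance, that replacement is typically done through a define rule. This is only an illustrative sketch of the kind of extra configuration the current design forces onto applications; the bundler choice, entry path, and target name here are assumptions, not part of the integration:

```shell
# Inline the function target at build time so the bundle contains a literal
# string instead of a process.env lookup that Google Cloud Functions would
# never resolve on its own (esbuild and the names used here are illustrative).
esbuild src/index.ts --bundle --platform=node \
  --define:process.env.FUNCTION_TARGET='"apollo-graphql"' \
  --outfile=dist/index.js
```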

Proposed solution

When thinking about serverless function composition, we often think about having control over an explicitly declared function handler. This is a pattern currently shared with all major cloud providers that offer FaaS solutions (with the exception of a few differences between function parameters and runtime specificity):

export async function handler(req: Request, res: Response) {
  // handler stuff
}

With this in mind, the proposed solution is to export three core functions that will help the developer compose their own handler: requestProxy, responseProxy and startServer.


This approach moves the handler function out of the library code, giving the developer more control over the handler implementation while still keeping the setup needed for a functional Apollo Server instance to a minimum.

This is an example of the most basic implementation:

import { ApolloServer } from '@apollo/server';
import { requestProxy, responseProxy, startServer } from '@as-integrations/google-cloud-functions';

import type { Request, Response } from '@google-cloud/functions-framework';

/** Resolvers and type definitions... */

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
});

const server = startServer(apolloServer, {});

export async function handler(req: Request, res: Response) {
  const graphQLResponse = await requestProxy({ req, res, server });
  await responseProxy(res, graphQLResponse);
}

Just like in the current library implementation, the server instance is initialized outside the request handler (starting once per container, not once per request). The developer can still pass custom context properties through the options object of startServer.

startServer will now also return an object containing the created server instance and the provided options. This object is used to pipe the server and options back to requestProxy, enabling the use of executeHTTPGraphQLRequest.
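As a rough sketch of this pipe-back pattern in plain TypeScript (the function bodies here are hypothetical stand-ins, not the actual integration code):

```typescript
// Hypothetical sketch of the "return a bundle, pipe it back" pattern.
// The real startServer/requestProxy would additionally start the Apollo
// Server instance and execute the GraphQL request.
interface ServerBundle<TServer, TOptions> {
  server: TServer;
  options: TOptions;
}

// startServer bundles the instance with its options so both are available later.
function startServer<TServer, TOptions>(
  server: TServer,
  options: TOptions
): ServerBundle<TServer, TOptions> {
  return { server, options };
}

// requestProxy receives the bundle and can reach both the server and the
// options originally supplied at startup.
function requestProxy<TServer, TOptions>(
  bundle: ServerBundle<TServer, TOptions>
): TServer {
  return bundle.server;
}

const bundle = startServer({ name: 'apollo' }, { context: undefined });
console.log(requestProxy(bundle).name); // 'apollo'
```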

Compared to the current implementation, this approach offers some hard-to-ignore advantages:

  • The developer has full control over the handler, which allows them to create further customizations (middlewares, header handling, and so on) before processing the request and sending the GraphQL response back to the client.
  • It solves the underlying issue with how Google Cloud Functions looks for function targets in the codebase.
  • Since the name of the function is declared in the server application's entry point, no more complicated bundling configurations need to be enforced (aside from compiling to JavaScript when writing in TypeScript, which is expected).
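For instance, owning the exported handler makes middleware-style composition straightforward. This is purely illustrative; withAuthHeader and the request/response shapes below are hypothetical, not part of the integration:

```typescript
// Minimal sketch of wrapping a handler the developer now owns.
type Req = { headers: Record<string, string> };
type Res = { status?: number; body?: string };
type Handler = (req: Req, res: Res) => Promise<void>;

// Hypothetical middleware: reject early when no Authorization header is present.
function withAuthHeader(next: Handler): Handler {
  return async (req, res) => {
    if (!req.headers['authorization']) {
      res.status = 401;
      res.body = 'Unauthorized';
      return;
    }
    await next(req, res);
  };
}

// Stand-in for the requestProxy/responseProxy pair from the proposal.
const graphqlHandler: Handler = async (_req, res) => {
  res.status = 200;
  res.body = '{"data":{}}';
};

const handler = withAuthHeader(graphqlHandler);

// Without an auth header, the wrapper short-circuits before GraphQL runs.
const res: Res = {};
handler({ headers: {} }, res).then(() => console.log(res.status)); // prints 401
```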

Why is this an "RFC"?

While I believe this is a clear improvement over the current implementation for this specific integration, it also diverges from the current API, which makes this a breaking change.

Since this is an attempt at stabilizing an API proposal before a v1 release of this integration, I would like to request some community feedback from both users and maintainers of other integrations in this space, in order to decide if we should go forward with this approach.

It is also likely that I could have overlooked some fundamental aspect of how Google Cloud Functions works while trying different approaches around the current API implementation. If someone identifies that this is the case, don't hesitate to reach out.

pothos was added to `/example` directory to give a slightly more complex usage visualization and a more accessible layer for accessing contexts from within query resolvers

update build pipeline to correctly output type definitions, sourcemaps, and cjs/esm modules

update `/example` directory to remove unneeded bundler configuration and conform to the new integration API
@ernestoresende ernestoresende added documentation Improvements or additions to documentation enhancement New feature or request labels May 15, 2023
@ernestoresende ernestoresende self-assigned this May 15, 2023
Contributor

@trevor-scheer trevor-scheer left a comment


The API changes seem pretty reasonable to me. Since this is a pretty significant change, you can consider releasing an alpha (maybe land this to a next branch) to try it out and field feedback from others who might be using it.


builder.queryType();
Contributor


What is this line doing? Is .queryType() effectful?

Collaborator Author


This is actually from Pothos, the schema builder; it's what you call to initialize a root Query type on the GraphQL schema.

Honestly, Pothos is only there because I needed something I was familiar with to validate that the context was being correctly piped through the handler and reaching the field resolver.

We can revert to the schema-first approach of the previous example directory to avoid confusion about what's pertinent to Apollo Server.

ernestoresende and others added 5 commits May 20, 2023 13:27
Co-authored-by: Trevor Scheer <[email protected]>
@ernestoresende
Collaborator Author

I'm sorry this has been on hold for so long, that's on me. I meant for this to also include the integration test suites we have on most of the other packages, but never got around to actually delving into it.

From what I've seen so far, release-please does not have a way of handling pre-releases with flags, but I can publish it manually with the desired pre-release flag (alpha or canary). Is it agreed that this should mark the launch of 1.0.0?

@trevor-scheer
Contributor

All good! The good news is I got the integration tests running against the repo.

If you'd like, we can switch to changesets, which I've really enjoyed using (and have had success with for prereleases). The changes it makes are a bit more explicit than using semantic commits to control versioning, and it gives you the ability to separate changelog entries from commit messages. Not necessary though - you could just go for the 1.0.0 and follow up with any patches as needed 👍

@ernestoresende
Collaborator Author

We can surely switch to it; I've been using it on some personal projects as well, and it works well with prerelease publishing flows.
