
Commit dfa5ffc

am-stead, sabrowning1, isaacmbrown, SiaraMist, and Copilot authored
[DO NOT MERGE]: Megabranch for "GitHub Spark" (public preview) (#55441)
Co-authored-by: Sam Browning <[email protected]>
Co-authored-by: Isaac Brown <[email protected]>
Co-authored-by: Siara <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: cmuto09 <[email protected]>
Co-authored-by: hubwriter <[email protected]>
Co-authored-by: Sarah Schneider <[email protected]>
Co-authored-by: isaacmbrown <[email protected]>
Co-authored-by: Kelsey Conophy <[email protected]>
1 parent: 3b59285 · commit: dfa5ffc

17 files changed: +428 -0 lines changed
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
---
title: About billing for GitHub Spark
intro: 'Learn how {% data variables.product.prodname_spark %} is billed for users.'
versions:
  feature: spark
topics:
  - Copilot
shortTitle: Billing for Spark
---
{% data reusables.copilot.spark-business-intro %}

> [!NOTE]
> {% data reusables.spark.preview-note-spark %}

## Billing for {% data variables.product.prodname_spark_short %} app creation

Each prompt consumes four premium requests, which draw from your plan's premium request allowance. If you or an administrator has set a budget for premium requests beyond your plan's allowance, each additional request over your plan's included amount is billed at {% data variables.copilot.additional_premium_requests %}, meaning that one prompt to {% data variables.product.prodname_spark_short %} would cost **$0.16**. See [AUTOTITLE](/copilot/concepts/copilot-billing/understanding-and-managing-requests-in-copilot).
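The arithmetic above can be sketched as a small helper. This is only an illustration: the $0.04 per-request overage price is an assumption inferred from the $0.16-per-prompt figure, and the function name is invented, not part of any GitHub API.

```python
PREMIUM_REQUESTS_PER_PROMPT = 4    # fixed rate for every Spark prompt
OVERAGE_PRICE_PER_REQUEST = 0.04   # USD, assumed from the $0.16/prompt figure

def spark_overage_cost(prompts: int, remaining_allowance: int) -> float:
    """Overage cost of `prompts` Spark prompts, given how many premium
    requests remain in the plan's included allowance."""
    requests_used = prompts * PREMIUM_REQUESTS_PER_PROMPT
    billable = max(0, requests_used - remaining_allowance)
    return round(billable * OVERAGE_PRICE_PER_REQUEST, 2)

print(spark_overage_cost(prompts=1, remaining_allowance=0))    # 0.16
print(spark_overage_cost(prompts=10, remaining_allowance=20))  # 0.8
```

With the allowance exhausted, one prompt costs 4 × $0.04 = $0.16; prompts covered by the remaining allowance cost nothing extra.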
## Billing and limits for {% data variables.product.prodname_spark_short %} app deployment

You can publish apps created with {% data variables.product.prodname_spark_short %} to a deployment environment.

Deployed apps do not currently incur any charges. However, {% data variables.product.company_short %} currently **limits usage** of deployed sparks based on criteria including the number of HTTP requests, data transfer, and storage.

* Limits apply to the billable owner: if you own 10 deployed sparks, all 10 count towards the limits.
* When any limit is reached, the spark is unpublished for the rest of the billing period.

In the future, a new billing system will allow sparks to remain deployed once a limit is reached, with additional usage charged to the spark's billable owner. {% data variables.product.company_short %} will publish the limits once they are confirmed following a testing period. This article will be updated when more details are available.
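The per-owner limit behavior described above can be pictured as a simple check. Everything in this sketch is hypothetical: GitHub has not published the actual limits, and the metric names and thresholds below are invented for illustration.

```python
# Hypothetical caps; GitHub has not published the real limits.
LIMITS = {"http_requests": 1_000_000, "data_transfer_gb": 100, "storage_gb": 1}

def owner_over_limit(usage_by_owner: dict) -> bool:
    """True if any metric for a billable owner has reached its cap.
    Usage is summed across all of the owner's deployed sparks, so ten
    deployed sparks all draw down the same per-owner allowance."""
    return any(usage_by_owner.get(metric, 0) >= cap
               for metric, cap in LIMITS.items())

# Combined usage of one owner's ten sparks exceeds the HTTP-request cap,
# so their sparks would be unpublished for the rest of the billing period.
combined = {"http_requests": 1_200_000, "data_transfer_gb": 40, "storage_gb": 0}
print(owner_over_limit(combined))  # True
```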
## Further reading

* [AUTOTITLE](/copilot/responsible-use-of-github-copilot-features/responsible-use-of-github-spark)
* [AUTOTITLE](/copilot/tutorials/building-ai-app-prototypes)

content/copilot/concepts/copilot-billing/index.md

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ children:
   - /about-billing-for-individual-copilot-plans
   - /about-billing-for-github-copilot-in-your-organization
   - /about-billing-for-github-copilot-in-your-enterprise
+  - /about-billing-for-github-spark
 redirect_from:
   - /managing-copilot/managing-copilot-as-an-individual-subscriber/billing-and-payments
   - /copilot/managing-copilot/understanding-and-managing-copilot-usage

content/copilot/concepts/copilot-billing/understanding-and-managing-requests-in-copilot.md

Lines changed: 1 addition & 0 deletions
@@ -39,6 +39,7 @@ The following {% data variables.product.prodname_copilot_short %} features can u
 | [{% data variables.product.prodname_copilot_short %} code review](/copilot/using-github-copilot/code-review/using-copilot-code-review) | When you assign {% data variables.product.prodname_copilot_short %} as a reviewer for a pull request, **one premium request** is used each time {% data variables.product.prodname_copilot_short %} posts comments to the pull request. |
 | [{% data variables.copilot.copilot_extensions_short %}](/copilot/building-copilot-extensions/about-building-copilot-extensions) | {% data variables.copilot.copilot_extensions_short %} uses **one premium request** per user prompt, multiplied by the model's rate. |
 | [{% data variables.copilot.copilot_spaces %}](/copilot/using-github-copilot/copilot-spaces/about-organizing-and-sharing-context-with-copilot-spaces) | {% data variables.copilot.copilot_spaces %} uses **one premium request** per user prompt, multiplied by the model's rate. |
+| [{% data variables.product.prodname_spark_short %}](/copilot/tutorials/building-ai-app-prototypes) | Each prompt to {% data variables.product.prodname_spark_short %} uses a fixed rate of **four premium requests**. |

 ## How do request allowances work per plan?

content/copilot/get-started/github-copilot-features.md

Lines changed: 4 additions & 0 deletions
@@ -69,6 +69,10 @@ Organize and centralize relevant content—like code, docs, specs, and more—in

 Create and manage collections of documentation to use as context for chatting with {% data variables.product.prodname_copilot_short %}. When you ask a question in {% data variables.copilot.copilot_chat_dotcom_short %} or in {% data variables.product.prodname_vscode_shortname %}, you can specify a knowledge base as the context for your question. See [AUTOTITLE](/copilot/customizing-copilot/managing-copilot-knowledge-bases).

+### {% data variables.product.prodname_spark %} ({% data variables.release-phases.public_preview %})
+
+Build and deploy full-stack applications using natural-language prompts that seamlessly integrate with the {% data variables.product.github %} platform for advanced development. See [AUTOTITLE](/copilot/tutorials/building-ai-app-prototypes).
+
 ## {% data variables.product.prodname_copilot %} features for administrators

 The following features are available to organization and enterprise owners with a {% data variables.copilot.copilot_business_short %} or {% data variables.copilot.copilot_enterprise_short %} plan.

content/copilot/responsible-use-of-github-copilot-features/index.md

Lines changed: 1 addition & 0 deletions
@@ -18,4 +18,5 @@ children:
   - /responsible-use-of-github-copilot-text-completion
   - /responsible-use-of-github-copilot-code-review
   - /responsible-use-of-copilot-coding-agent-on-githubcom
+  - /responsible-use-of-github-spark
 ---
Lines changed: 109 additions & 0 deletions
@@ -0,0 +1,109 @@
---
title: Responsible use of GitHub Spark
shortTitle: Spark
intro: 'Learn how to use {% data variables.product.prodname_spark %} responsibly by understanding its purposes, capabilities, and limitations.'
versions:
  feature: spark
topics:
  - Copilot
  - AI
type: rai
---
{% data reusables.rai.spark-preview-note %}

## About {% data variables.product.prodname_spark %}

{% data variables.product.prodname_spark_short %} is a {% data variables.product.prodname_copilot_short %}-powered platform for creating and sharing applications (“sparks”) that can be tailored to individual needs and accessed seamlessly across desktop and mobile devices, without requiring users to write or deploy code.

{% data variables.product.prodname_spark_short %} offers a natural-language-centric development environment for application creation and a fully managed runtime environment that scales with your sparks’ needs. {% data variables.product.prodname_spark_short %} eliminates the need to manually manage infrastructure or stitch together multiple tools, letting you focus on building.
### Input processing

> [!NOTE]
> {% data variables.product.prodname_spark_short %} currently leverages {% data variables.copilot.copilot_claude_sonnet_40 %}. This model is subject to change.

Input prompts in {% data variables.product.prodname_spark_short %} are pre-processed by {% data variables.product.prodname_copilot_short %}, augmented with contextual information from your current {% data variables.product.prodname_spark_short %} inputs, and sent to a large language model-powered agent within your development environment. This context includes information from your spark, such as code from your current application, previous prompts supplied in the {% data variables.product.prodname_spark_short %} interface, and any error logs from your spark’s development environment.

The system is designed only to generate code based on submitted prompts. It is not capable of conversational interactions. English is the preferred language for submitted prompts.
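The pre-processing described above can be pictured as assembling a single context bundle for the agent. The sketch below is purely illustrative: the `build_context` helper and its field names are invented, not {% data variables.product.prodname_spark_short %}’s actual internals.

```python
def build_context(prompt, app_code, previous_prompts, error_logs):
    """Hypothetical sketch: combine the user's prompt with spark-specific
    context of the kind described above (field names are invented)."""
    return {
        "prompt": prompt,             # the user's current request
        "current_code": app_code,     # code from the current application
        "history": previous_prompts,  # prior prompts from the Spark interface
        "errors": error_logs,         # error logs from the dev environment
    }

ctx = build_context(
    prompt="Add a dark-mode toggle",
    app_code="<current spark source>",
    previous_prompts=["Create a habit tracker"],
    error_logs=[],
)
```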
### Language model analysis

The prompt is then passed through a large language model, a neural network that has been trained on a large body of text data. The language model analyzes the input prompt to help the agent reason about the task and leverage the necessary tools.
### Agent execution

The agent, which runs in your development environment, accepts your prompt and the additional context passed to it, then decides how to update your spark to satisfy your request. The agent can operate your development environment by writing code, running commands, and reading execution outputs. All of the agent’s actions are aimed at producing functional, accurate code that fulfills your prompt. The only output from the agent is your application code.
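The write-run-observe behavior described above can be sketched as a minimal loop. Everything here is a hypothetical stand-in (the toy environment, the success condition, and the loop itself are invented for illustration), not {% data variables.product.prodname_spark_short %} internals.

```python
from dataclasses import dataclass

@dataclass
class FakeEnv:
    """Toy dev environment: 'succeeds' once the code has no TODO marker."""
    code: str = "TODO"
    runs: int = 0

    def write_code(self, new_code: str) -> None:
        self.code = new_code

    def run(self) -> bool:
        self.runs += 1
        return "TODO" not in self.code  # pretend the app works when finished

def agent_loop(env: FakeEnv, prompt: str, max_iterations: int = 5) -> str:
    """Write code, run it, read the output, and iterate until it succeeds."""
    for i in range(max_iterations):
        # "Plan" an edit; here the second attempt is pretended to fix things.
        candidate = "TODO" if i == 0 else f"app code for: {prompt}"
        env.write_code(candidate)
        if env.run():   # read execution output; stop once it looks good
            break
    return env.code     # the agent's only output is application code

env = FakeEnv()
result = agent_loop(env, "habit tracker")
```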
### {% data variables.product.prodname_spark_short %} frameworks

The {% data variables.product.prodname_spark_short %} agent is trained to use frameworks and SDKs supplied by {% data variables.product.prodname_spark_short %}, which ensure modern design and secure deployments seamlessly integrated with {% data variables.product.prodname_spark_short %}’s runtime component. The design framework is flexible and modular, enabling you to easily modify the theme to match your desired look and feel. {% data variables.product.prodname_spark_short %}’s runtime integration, accessible via the SDK, uses best practices for web deployments to ensure secure, scalable deployments.
### Adding inference capabilities to your spark

{% data variables.product.prodname_spark_short %}’s SDK natively integrates with {% data variables.product.prodname_github_models %}, allowing you to incorporate model inference into your spark. If {% data variables.product.prodname_spark_short %} determines that your application requires inference capabilities, it will add them using the {% data variables.product.prodname_spark_short %} SDK.

{% data variables.product.prodname_spark_short %} gives you the tools to create, modify, and test the prompts that will be used with these inference capabilities. {% data variables.product.prodname_spark_short %} does not test the prompts you create within your application, so you must ensure that the capabilities you include act as intended. For more information on responsible use within {% data variables.product.prodname_github_models %}, see [AUTOTITLE](/github-models/responsible-use-of-github-models).
### Data processing

{% data variables.product.prodname_spark_short %} collects data to operate the service, including the prompts, suggestions, and code snippets necessary to ensure continuity between sessions. {% data variables.product.prodname_spark_short %} also collects additional usage information, including usage patterns, submitted feedback, and performance telemetry.
## Use cases for {% data variables.product.prodname_spark_short %}

### Building and deploying full-stack web applications

You can use {% data variables.product.prodname_spark_short %} to build full-stack web applications for you using natural language. {% data variables.product.prodname_spark_short %}’s integrated runtime environment allows you to deploy these applications to the public internet. You can define permissions for these deployed applications based on {% data variables.product.github %} account visibility, allowing them to be visible to the general public, specific {% data variables.product.github %} members, members of your team or organization, or just you. Sparks can be anything, from board game score trackers to full software-as-a-service products; however, whatever you deploy remains subject to {% data variables.product.github %}’s [Terms](/free-pro-team@latest/site-policy/github-terms/github-terms-for-additional-products-and-features#github-copilot) for user-generated content.
### Prototyping ideas

{% data variables.product.prodname_spark_short %} helps developers, designers, product managers, and other builders rapidly prototype ideas without needing to build applications from scratch or construct complex mockups. These prototypes can be deployed for ease of sharing, or can remain unpublished as a way for builders to instantly see their vision.
## Improving performance for {% data variables.product.prodname_spark_short %}

{% data variables.product.prodname_spark_short %} can build a wide variety of applications, and iterate on them over time to increase complexity as new requirements are surfaced. To enhance performance and address some limitations of {% data variables.product.prodname_spark_short %}, there are various best practices you can adopt. For more information about the limitations of {% data variables.product.prodname_spark_short %}, see [Limitations of {% data variables.product.prodname_spark_short %}](#limitations-of-github-spark).
### Keep your prompts specific and on topic

{% data variables.product.prodname_spark_short %} is intended to build and iterate on your spark. The more specific you can be about the intended behaviors and interactions, the better {% data variables.product.prodname_spark_short %}’s output will be. Incorporating relevant context such as specific scenarios, mockups, or specifications will help {% data variables.product.prodname_spark_short %} understand your intent, which will improve the output you receive.

{% data variables.product.prodname_spark_short %} also incorporates context from previous prompts into each subsequent revision it generates. Submitting off-topic prompts may hinder performance on subsequent revisions, so try to keep your prompts as relevant as possible to the application you are building.
### Use targeted edits appropriately

Targeted edits in {% data variables.product.prodname_spark_short %} allow you to select specific elements within your application, letting you refine the style, substance, or behavior of individual elements. Targeted edits are an excellent way to constrain the edit surface area and express intent to {% data variables.product.prodname_spark_short %}. Using targeted edits where possible (rather than global prompts) will result in more accurate changes, as well as fewer side effects in your application as {% data variables.product.prodname_spark_short %} generates new revisions.
### Verify {% data variables.product.prodname_spark_short %}’s output

While {% data variables.product.prodname_spark_short %} is an extremely powerful tool, it may still make mistakes, from misunderstandings of your goals to simple syntax errors in your generated spark. You should always use {% data variables.product.prodname_spark_short %}’s provided application preview to verify that your spark behaves as intended in different scenarios. If you are comfortable with code, it is also best practice to ensure the generated code meets your code quality standards.
## Limitations of {% data variables.product.prodname_spark %}

### Interpretation of user intent

{% data variables.product.prodname_spark_short %} is not always correct in its interpretation of your intent. You should always use {% data variables.product.prodname_spark_short %}’s provided preview to confirm accurate behavior within your spark.
### Limited scope

{% data variables.product.prodname_spark_short %} is backed by {% data variables.product.prodname_copilot_short %} and has therefore been trained on a large body of code and relevant applications. However, it may still struggle with complex or truly novel applications. {% data variables.product.prodname_spark_short %} performs best on common personal application scenarios (for example, productivity tools, learning aids, and life management utilities), and when the natural language instruction is provided in English.
### Security limitations

While {% data variables.product.prodname_spark_short %}’s runtime follows best practices for application deployment, the code itself is generated probabilistically, which can introduce vulnerabilities, especially if those vulnerabilities are common in the applications {% data variables.product.prodname_spark_short %} was trained on. You should be careful when building applications that manage personal or sensitive data, and always review and test the generated application thoroughly.
### Legal and regulatory considerations

Users need to evaluate potential specific legal and regulatory obligations when using any AI services and solutions, which may not be appropriate for use in every industry or scenario. Additionally, AI services or solutions are not designed for, and may not be used in, ways prohibited by applicable terms of service and relevant codes of conduct.
### Offensive content

{% data variables.product.prodname_spark_short %} has built-in protections against harmful, hateful, or offensive content. Please report any examples of offensive content to [email protected], and include your spark’s URL so that we can identify the spark.

You can report problematic or illegal content via Feedback, or you can report a spark as abuse or spam. See [AUTOTITLE](/communities/maintaining-your-safety-on-github/reporting-abuse-or-spam) and {% data variables.product.github %}'s [Content Removal Policies](/free-pro-team@latest/site-policy/content-removal-policies).
## Further reading

* [AUTOTITLE](/copilot/tutorials/building-your-first-app-in-minutes-with-github-spark)
* [AUTOTITLE](/copilot/tutorials/building-ai-app-prototypes)
* [AUTOTITLE](/copilot/concepts/copilot-billing/about-billing-for-github-spark)
* [AUTOTITLE](/github-models/responsible-use-of-github-models)
* [AUTOTITLE](/free-pro-team@latest/site-policy/github-terms/github-pre-release-license-terms)
