feat: Improve prompt #4027
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
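For reference, prow looks for a release-note block in the PR description; adding one removes the label (the wording below is only illustrative, and NONE can be used when there is no user-facing change):

```release-note
Improve the application prompt-generation flow.
```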
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
applicationID.value = applicationId
dialogVisible.value = true
originalUserInput.value = ''
chatMessages.value = []
There are no identified irregularities, potential issues, or optimization suggestions in the provided code snippet. The changes introduced to include applicationID and ensure it is used appropriately within the generatePrompt function are minor additions that do not alter the overall functionality of the component.
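For context, a minimal sketch of how these refs and the dialog's open function fit together follows; the <script setup> style and everything not visible in the diff are assumptions, not taken from the PR:

// Sketch of the dialog component state; names beyond the diff lines are assumed.
import { ref } from 'vue'

const applicationID = ref('')        // application the prompt is generated for
const dialogVisible = ref(false)     // controls dialog visibility
const originalUserInput = ref('')    // the user's draft prompt text
const chatMessages = ref<any[]>([])  // preview conversation, cleared on open

// Reset the dialog state each time it is opened for a given application.
const open = (applicationId: string) => {
  applicationID.value = applicationId
  dialogVisible.value = true
  originalUserInput.value = ''
  chatMessages.value = []
}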
return postStream(`${prefix}/workspace/${workspace_id}/application/${application_id}/model/${model_id}/prompt_generate`, data)
}
The provided code snippet has a couple of issues:
- Incorrect URL Construction: the original line constructs the URL in the incorrect form:
  return postStream(`${prefix}/workspace/${workspace_id}/application/model/${model_id}/prompt_generate`, data)
  Should be:
  return postStream(`${prefix}/workspace/${workspace_id}/application/${application_id}/model/${model_id}/prompt_generate`, data)
- Missing Return Type Annotation for generate_prompt: although TypeScript annotations are optional, it is considered good practice to include them for clarity.
- Unnecessary Use of the Window Object: assuming window.MaxKB?.prefix is defined elsewhere, reading the value from the global scope (globalThis) might be more appropriate depending on how you want this logic scoped.
Here is a corrected and improved version:
import { Ref } from 'vue';
import { Result } from './your_module_results'; // Adjust with actual import path

interface MaxKB {
  prefix?: string;
}

/**
 * Opens a prompt-generation request for a given application ID.
 * The `data` payload was referenced but never declared in the original snippet,
 * so it is added to the signature here.
 */
export const open = async (
  application_id: string,
  data: Record<string, unknown>,
  loading?: Ref<boolean>
): Promise<Result<string>> => {
  if (!loading) throw new Error('Loading must be supplied');
  const prefix = (globalThis as typeof globalThis & { MaxKB?: MaxKB }).MaxKB?.prefix ?? '/admin';
  try {
    loading.value = true; // loading.value is assumed to be a reactive ref
    // The endpoint path below is illustrative; adjust it to the real API route.
    const response = await fetch(`${prefix}/api/workspace/${application_id}/app/apply_model/prompt_generate`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    });
    if (!response.ok) {
      console.error(`HTTP error! status: ${response.status}`);
      throw new Error(response.statusText);
    }
    const result = await response.json();
    if (!result.success) {
      console.warn(result.message || 'Failed to generate prompt.');
      throw new Error('Prompt generation failed.');
    }
    return result; // assumed to already be a Result<string> envelope
  } catch (err) {
    console.error(err);
    throw err;
  } finally {
    loading.value = false;
  }
};

/**
 * Generates optimization prompts.
 * @param workspaceId
 * @param modelId
 * @param appId
 * @param data Any additional request parameters or payload data
 * @returns Response indicating success or failure along with generated prompt data.
 */
async function generate_prompt(workspaceId: string, modelId: string, appId: string, data: any): Promise<any> {
  const prefix = (globalThis as typeof globalThis & { MaxKB?: MaxKB }).MaxKB?.prefix ?? '/admin';
  try {
    const res = await fetch(`${prefix}/workspace/${workspaceId}/application/${appId}/model/${modelId}/prompt_generate`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(data),
    });
    if (!res.ok) {
      console.error(`HTTP error! status: ${res.status}`);
      throw new Error(res.statusText);
    }
    const jsonResult = await res.json();
    if (jsonResult && !jsonResult.success) {
      console.warn(jsonResult.message || 'Failed to generate prompt.');
      throw new Error('Prompt generation had a problem.');
    }
    return jsonResult.promptData;
  } catch (err) {
    console.error(err);
    throw err;
  }
}

Note that I also used fetch() instead of postStream(), a utility function you may need to define (or keep) to match your implementation details. Also ensure that all imports, such as Ref and Result, are correctly resolved from their respective modules.
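For comparison, a minimal in-place fix that keeps the module's existing postStream helper and only addresses the points raised above is sketched below; the return type is annotated loosely because the actual type returned by postStream is not shown in the diff and is an assumption here:

// Minimal sketch: add the missing ${application_id} segment and an explicit return type.
// `prefix` and `postStream` are assumed to be defined/imported in this module, as in the diff.
const generate_prompt = (
  workspace_id: string,
  application_id: string,
  model_id: string,
  data: any
): Promise<any> => {
  return postStream(
    `${prefix}/workspace/${workspace_id}/application/${application_id}/model/${model_id}/prompt_generate`,
    data
  )
}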
return branch_list
  .filter((item, i) => i < index)
  .map((item) => item.height + 8)
  .reduce((x, y) => x + y, 0)
The function get_up_index_height looks correct for calculating the height of ancestors up to a given index in an array. However, there is a minor typo that can be corrected:
-const get_up_index_height = (branch_lsit: Array<any>, index: number) => {
+const get_up_index_height = (branch_list: Array<any>, index: number) => {

This change should have no effect on its functionality.
Optimization Suggestions: There isn't much that could significantly optimize this function with the current implementation. The operations involved (filtering, mapping, and reducing) are already efficient for small arrays. If you anticipate very large arrays, the chain could be collapsed into a single reduce pass to avoid building intermediate arrays, although in most cases JavaScript engines handle this efficiently. Also ensure that any other functions used alongside get_up_index_height perform well and avoid unnecessary computations.
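A self-contained sketch of the renamed function, the single-pass variant, and an illustrative call follows; the item shape (a numeric height) and the 8px spacing constant come from the snippet above, while the concrete numbers are only examples:

// Sketch assuming each branch item exposes a numeric `height`, with 8px spacing per item.
const get_up_index_height = (branch_list: Array<{ height: number }>, index: number) => {
  return branch_list
    .filter((item, i) => i < index)
    .map((item) => item.height + 8)
    .reduce((x, y) => x + y, 0)
}

// Equivalent single-pass variant that avoids the intermediate arrays.
const get_up_index_height_single_pass = (branch_list: Array<{ height: number }>, index: number) =>
  branch_list.slice(0, index).reduce((sum, item) => sum + item.height + 8, 0)

// Example: heights 40 and 60 sit above index 2, so (40 + 8) + (60 + 8) = 116.
get_up_index_height([{ height: 40 }, { height: 60 }, { height: 30 }], 2) // 116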