Releases: pinecone-io/go-pinecone

Release v5.4.0

19 Feb 19:52

This release adds support for creating and configuring index ReadCapacity for BYOC indexes. CreateBYOCIndexRequest now includes ReadCapacity and will properly apply the configuration when calling Client.CreateBYOCIndex or Client.ConfigureIndex:

ctx := context.Background()

pc, err := pinecone.NewClient(pinecone.NewClientParams{ApiKey: "YOUR_API_KEY"})
if err != nil {
    log.Fatal(err)
}

dimension := int32(1536)
metric := pinecone.Cosine
nodeType := "t1"
replicas, shards := int32(1), int32(1)

idx, err := pc.CreateBYOCIndex(ctx, &pinecone.CreateBYOCIndexRequest{
    Name:        "my-byoc-index",
    Environment: "my-environment",
    Dimension:   &dimension,
    Metric:      &metric,
    ReadCapacity: &pinecone.ReadCapacityParams{
        Dedicated: &pinecone.ReadCapacityDedicatedConfig{
            NodeType: &nodeType,
            Scaling: &pinecone.ReadCapacityScaling{
                Manual: &pinecone.ReadCapacityManualScaling{
                    Replicas: &replicas,
                    Shards:   &shards,
                },
            },
        },
    },
})

if err != nil {
    log.Fatal(err)
}
fmt.Println("Created BYOC index:", idx.Name)

Support for ScanFactor and MaxCandidates has also been added to QueryByVectorIdRequest and QueryByVectorValuesRequest, for use with IndexConnection.QueryByVectorValues and IndexConnection.QueryByVectorId. These parameters are only supported for dedicated (DRN) dense indexes.

idxConn, err := pc.Index(pinecone.NewIndexConnParams{Host: idx.Host})
if err != nil {
    log.Fatal(err)
}

scanFactor := float32(2.0)
maxCandidates := uint32(500)

// Query by vector values
res, err := idxConn.QueryByVectorValues(ctx, &pinecone.QueryByVectorValuesRequest{
    Vector:        []float32{0.1, 0.2, 0.3},
    TopK:          10,
    ScanFactor:    &scanFactor,
    MaxCandidates: &maxCandidates,
})
if err != nil {
    log.Fatal(err)
}
fmt.Printf("QueryByVectorValues returned %d match(es)\n", len(res.Matches))

// Query by vector ID
res, err = idxConn.QueryByVectorId(ctx, &pinecone.QueryByVectorIdRequest{
    VectorId:      "my-vector-id",
    TopK:          10,
    ScanFactor:    &scanFactor,
    MaxCandidates: &maxCandidates,
})

if err != nil {
    log.Fatal(err)
}
fmt.Printf("QueryByVectorId returned %d match(es)\n", len(res.Matches))

Full Changelog: v5.3.0...v5.4.0

Release v5.3.0

15 Dec 23:56

Resolves missing API fields in client interface types:

  • NamespaceDescription:
    • Schema and IndexedFields will now be properly returned on CreateNamespace and ListNamespaces calls in IndexConnection
  • EmbedResponse
    • VectorType added
  • DescribeIndexStatsResponse
    • Metric, VectorType, MemoryFullness, StorageFullness added

What's Changed

  • Return all fields for NamespaceDescription on CreateNamespace call by @austin-denoble in #132
  • Fix ListNamespaces responses, centralize NamespaceDescription conversion by @austin-denoble in #133
  • chore: fix inaccurate comments in ReadCapacityDedicatedConfig struct by @changgesi in #131
  • Update EmbedResponse and DescribeIndexStatsResponse by @austin-denoble in #134

Full Changelog: v5.2.0...v5.3.0

Release v5.2.0

08 Dec 22:18

v5.2.0 resolves problems when using Client.ConfigureIndex to adjust an index's dedicated read node configuration via ReadCapacityParams. Previously, all fields in ReadCapacityDedicatedConfig and ReadCapacityScaling were required, so updating only NodeType, Shards, or Replicas on a dedicated index would marshal zero values into the JSON for the omitted fields. These fields are now pointers, making them optional in Client.CreateIndex and Client.ConfigureIndex, and they are likewise represented as pointers in ReadCapacity responses on index models.

Additionally, ReadCapacityParams now includes an explicit OnDemand field, and ReadCapacityOnDemandConfig has been added; together these allow converting an existing dedicated index back to on-demand. The following example creates an on-demand index, converts it to dedicated, and then adjusts the dedicated configuration.

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v5/pinecone"
)

func ConfigureDedicatedReadNodes() {
	ctx := context.Background()

	pc, err := pinecone.NewClient(pinecone.NewClientParams{
		ApiKey: "YOUR_API_KEY",
	})
	if err != nil {
		log.Fatalf("Failed to create Client: %v", err)
	}

	// Create the serverless index with on demand read capacity (default)
	dimension := int32(1024)
	metric := pinecone.Cosine
	idx, err := pc.CreateServerlessIndex(ctx, &pinecone.CreateServerlessIndexRequest{
		Name:      "my-serverless-index",
		Dimension: &dimension,
		Metric:    &metric,
		Cloud:     pinecone.Aws,
		Region:    "us-east-1",
	})
	if err != nil {
		log.Fatalf("Failed to create serverless index: %v", err)
	}
	fmt.Printf("Successfully created index: %s\n", idx.Name)

	// Convert the index to dedicated
	nodeType := "t1"
	shards := int32(1)
	replicas := int32(1)
	updatedIdx, err := pc.ConfigureIndex(ctx, "my-serverless-index", pinecone.ConfigureIndexParams{
		ReadCapacity: &pinecone.ReadCapacityParams{
			Dedicated: &pinecone.ReadCapacityDedicatedConfig{
				NodeType: &nodeType,
				Scaling: &pinecone.ReadCapacityScaling{
					Manual: &pinecone.ReadCapacityManualScaling{
						Replicas: &replicas,
						Shards:   &shards,
					},
				},
			},
		},
	})
	if err != nil {
		log.Fatalf("Failed to configure index: %v", err)
	}
	fmt.Printf("Successfully updated index %s read capacity. Status: %+v\n", "my-serverless-index",
		updatedIdx.Spec.Serverless.ReadCapacity.Dedicated)

	// Update dedicated index read capacity configuration
	newShards := int32(2)
	newReplicas := int32(2)
	updatedIdx, err = pc.ConfigureIndex(ctx, "my-serverless-index", pinecone.ConfigureIndexParams{
		ReadCapacity: &pinecone.ReadCapacityParams{
			Dedicated: &pinecone.ReadCapacityDedicatedConfig{
				Scaling: &pinecone.ReadCapacityScaling{
					Manual: &pinecone.ReadCapacityManualScaling{
						Replicas: &newReplicas,
						Shards:   &newShards,
					},
				},
			},
		},
	})
	if err != nil {
		log.Fatalf("Failed to configure index: %v", err)
	}
	fmt.Printf("Successfully updated index %s read capacity. Status: %+v\n", "my-serverless-index",
		updatedIdx.Spec.Serverless.ReadCapacity.Dedicated)
}

Release v5.1.0

03 Dec 18:42

This release addresses a bug in the BYOC index creation flow. CreateBYOCIndexRequest did not expose the VectorType field, and CreateBYOCIndexRequest.Dimension was typed as an int32 value rather than a pointer, which prevented leaving Dimension unset when creating sparse indexes.

NOTE: Usage of CreateBYOCIndexRequest.Dimension will need to be updated from a value to a pointer at Client.CreateBYOCIndex() call sites.

Full Changelog: v5.0.0...v5.1.0

Release v5.0.0

11 Nov 04:20

This version of the Pinecone Go SDK depends on version 2025-10 of the Pinecone API. You can read more about versioning here. This v5 SDK release line should continue to receive fixes as long as the 2025-10 API version is in support.

Features

Dedicated Read Nodes

⚠️ Note: This feature is in early access and not yet available to all users. To request access, contact support.

Dedicated read nodes is a new feature that lets you reserve dedicated storage and compute resources for an index, ensuring predictable performance and cost efficiency for queries. It is ideal for workloads with millions to billions of records and moderate to high query rates.

You can now configure dedicated read capacity for serverless indexes to provide consistent performance and higher throughput. ReadCapacityParams and ReadCapacityDedicatedConfig can be used in CreateServerlessIndexRequest, CreateIndexForModelRequest, and ConfigureIndexParams for creating and configuring indexes with dedicated read nodes.

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v5/pinecone"
)

func DedicatedReadNodes() {
	ctx := context.Background()

	pc, err := pinecone.NewClient(pinecone.NewClientParams{
		ApiKey: "YOUR_API_KEY",
	})
	if err != nil {
		log.Fatalf("Failed to create Client: %v", err)
	}

	dimension := int32(128)
	metric := pinecone.Cosine

	// Configure dedicated read capacity
	readCapacity := &pinecone.ReadCapacityParams{
		Dedicated: &pinecone.ReadCapacityDedicatedConfig{
			NodeType: "t1",
			Scaling: &pinecone.ReadCapacityScaling{
				Manual: &pinecone.ReadCapacityManualScaling{
					Replicas: 1,
					Shards:   1,
				},
			},
		},
	}

	// Create the serverless index with dedicated read capacity
	idx, err := pc.CreateServerlessIndex(ctx, &pinecone.CreateServerlessIndexRequest{
		Name:         "my-serverless-index",
		Dimension:    &dimension,
		Metric:       &metric,
		Cloud:        pinecone.Aws,
		Region:       "us-east-1",
		ReadCapacity: readCapacity,
	})
	if err != nil {
		log.Fatalf("Failed to create serverless index: %v", err)
	}

	fmt.Printf("Successfully created index: %s\n", idx.Name)

	// Update the read capacity configuration
	updatedReadCapacity := &pinecone.ReadCapacityParams{
		Dedicated: &pinecone.ReadCapacityDedicatedConfig{
			NodeType: "t1",
			Scaling: &pinecone.ReadCapacityScaling{
				Manual: &pinecone.ReadCapacityManualScaling{
					Replicas: 2,  // Scale up replicas
					Shards:   2,  // Scale up shards
				},
			},
		},
	}

	// Configure the index with updated read capacity
	updatedIdx, err := pc.ConfigureIndex(ctx, "my-serverless-index", pinecone.ConfigureIndexParams{
		ReadCapacity: updatedReadCapacity,
	})
	if err != nil {
		log.Fatalf("Failed to configure index: %v", err)
	}

	fmt.Printf("Successfully updated index read capacity. Status: %s\n", 
		updatedIdx.Spec.Serverless.ReadCapacity.Dedicated.Status.State)
}

Index Schema

Pinecone indexes all metadata fields by default. However, large amounts of metadata can cause slower index building as well as slower query execution, particularly when data is not cached in a query executor’s memory and local SSD and must be fetched from object storage.
To prevent performance issues due to excessive metadata, you can limit metadata indexing to the fields that you plan to use for query filtering.

You can now define metadata schemas for serverless indexes to control which metadata fields are indexed. MetadataSchema and MetadataSchemaField have been added to specify which metadata fields should be filterable. The schema can be applied when creating a new index using CreateServerlessIndexRequest, CreateIndexForModelRequest, or CreateBYOCIndexRequest, and can also be applied to individual namespaces when creating them.

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v5/pinecone"
)

func IndexSchema() {
	ctx := context.Background()

	pc, err := pinecone.NewClient(pinecone.NewClientParams{
		ApiKey: "YOUR_API_KEY",
	})
	if err != nil {
		log.Fatalf("Failed to create Client: %v", err)
	}

	dimension := int32(128)
	metric := pinecone.Cosine

	// Define a schema for metadata indexing
	schema := &pinecone.MetadataSchema{
		Fields: map[string]pinecone.MetadataSchemaField{
			"category": {Filterable: true},
			"rating":   {Filterable: true},
			"year":     {Filterable: true},
		},
	}

	// Create the serverless index with schema
	idx, err := pc.CreateServerlessIndex(ctx, &pinecone.CreateServerlessIndexRequest{
		Name:      "my-serverless-index",
		Dimension: &dimension,
		Metric:    &metric,
		Cloud:     pinecone.Aws,
		Region:    "us-east-1",
		Schema:    schema,
	})
	if err != nil {
		log.Fatalf("Failed to create serverless index: %v", err)
	}

	fmt.Printf("Successfully created index: %s\n", idx.Name)
}

Fetch and Update Vectors by Metadata

You can now update and fetch vectors using metadata filters. UpdateVectorsByMetadata allows you to update all vectors matching a metadata filter in a single operation, while FetchVectorsByMetadata allows you to retrieve vectors that match specific metadata criteria with support for pagination.

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v5/pinecone"
)

func UpdateAndFetchByMetadata() {
	ctx := context.Background()

	pc, err := pinecone.NewClient(pinecone.NewClientParams{
		ApiKey: "YOUR_API_KEY",
	})
	if err != nil {
		log.Fatalf("Failed to create Client: %v", err)
	}

	idx, err := pc.DescribeIndex(ctx, "my-serverless-index")
	if err != nil {
		log.Fatalf("Failed to describe index: %v", err)
	}

	idxConnection, err := pc.Index(pinecone.NewIndexConnParams{
		Host:      idx.Host,
		Namespace: "my-custom-namespace",
	})
	if err != nil {
		log.Fatalf("Failed to create IndexConnection: %v", err)
	}

	// Update vectors by metadata filter
	// Update all vectors with category "electronics" and rating >= 4.0
	filterMap := map[string]interface{}{
		"$and": []interface{}{
			map[string]interface{}{
				"category": map[string]interface{}{"$eq": "electronics"},
			},
			map[string]interface{}{
				"rating": map[string]interface{}{"$gte": 4.0},
			},
		},
	}

	filter, err := pinecone.NewMetadataFilter(filterMap)
	if err != nil {
		log.Fatalf("Failed to create metadata filter: %v", err)
	}

	// Update metadata for matching vectors
	updateMetadata, err := pinecone.NewMetadata(map[string]interface{}{
		"category": "electronics",
		"rating":   5.0,  // Update rating to 5.0
		"tags":     []string{"popular", "trending", "featured"},  // Add new tag
	})
	if err != nil {
		log.Fatalf("Failed to create update metadata: %v", err)
	}

	updateRes, err := idxConnection.UpdateVectorsByMetadata(ctx, &pinecone.UpdateVectorsByMetadataRequest{
		Filter:   filter,
		Metadata: updateMetadata,
	})
	if err != nil {
		log.Fatalf("Failed to update vectors by metadata: %v", err)
	}

	fmt.Printf("Updated %d vector(s) matching the filter\n", updateRes.MatchedRecords)

	// Fetch vectors by metadata filter
	// Fetch all vectors with category "electronics" and rating >= 4.5
	fetchFilterMap := map[string]interface{}{
		"$and": []interface{}{
			map[string]interface{}{
				"category": map[string]interface{}{"$eq": "electronics"},
			},
			map[string]interface{}{
				"rating": map[string]interface{}{"$gte": 4.5},
			},
		},
	}

	fetchFilter, err := pinecone.NewMetadataFilter(fetchFilterMap)
	if err != nil {
		log.Fatalf("Failed to create fetch filter: %v", err)
	}

	limit := uint32(10)
	fetchRes, err := idxConnection.FetchVectorsByMetadata(ctx, &pinecone.FetchVectorsByMetadataRequest{
		Filter: fetchFilter,
		Limit:  &limit,
	})
	if err != nil {
		log.Fatalf("Failed to fetch vectors by metadata: %v", err)
	}

	fmt.Printf("Fetched %d vector(s) matching the filter:\n", len(fetchRes.Vectors))
	for id, vector := range fetchRes.Vectors {
		fmt.Printf("  Vector ID: %s, Metadata: %v\n", id, vector.Metadata)
	}
}

Create Namespace

You can now explicitly create namespaces within serverless indexes using CreateNamespace. Previously, namespaces were created implicitly when calling upsert operations against a specific namespace. When creating a namespace, you can optionally specify a metadata schema that can differ from the index-level schema. Additionally, Prefix has been added to ListNamespacesParams for filtering namespaces by prefix, and TotalCount has been added to ListNamespacesResponse to show the total number of namespaces matching the prefix.

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v5/pinecone"
)

func CreateNamespace() {
	ctx := context.Background()

	pc, err := pinecone.NewClient(pinecone.NewClientParams{
		ApiKey: "YOUR_API_KEY",
	})
	if err != nil {
		log.Fatalf("Failed to create Client: %v", err)
	}

	idx, err := pc.DescribeIndex(ctx, "my-serverless-index")
	if err != nil {
		log.Fatalf("Failed to describe index: %v", err)
	}

	idxConnection, err := pc.Index(pinecone.NewIndexConnParams{Host: idx.Host})
	if err != nil {
		log.Fatalf("Failed to create IndexConnection: %v", err)
	}

	// Define a slightly different schema for the new namespace
	// (e.g., adding a new field, removing a field, or keeping some fields)
	namespaceSchema := &pinecone.MetadataSchema{
		Fields: map[string]pinecone.MetadataSc...

Release v4.1.4

07 Aug 15:37

Full Changelog: v4.1.3...v4.1.4

Release v4.1.3

04 Aug 13:01

This patch release allows passing AccessToken as part of NewAdminClientParams in lieu of ClientId and ClientSecret. It also refactors exports and documentation around AdminClient internals.

Full Changelog: v4.1.2...v4.1.3

Release v4.1.2

22 Jul 04:48

ServerlessSpec and CreateServerlessIndexRequest structs now support SourceCollection.

Full Changelog: v4.1.1...v4.1.2

Release v4.1.1

18 Jul 15:53

The Import object should now properly contain PercentComplete and RecordsImported.

What's Changed

  • Add PercentComplete and RecordsImported fields to Import object construction by @vadimpanin in #118

Full Changelog: v4.1.0...v4.1.1

Release v4.1.0

13 Jul 18:48

Features

Pinecone Admin API

The pinecone package now contains structs and functions for working with Pinecone's administrative APIs to manage projects, API keys, and organizations programmatically. A prerequisite for working with the admin APIs is a service account. To create one, visit the Pinecone web console and navigate to Access > Service Accounts.

This release exposes a new AdminClient, along with functions for instantiating and authenticating it: NewAdminClient and NewAdminClientWithContext. AdminClient contains interfaces for working with Projects, API Keys, and Organizations.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/pinecone-io/go-pinecone/v4/pinecone"
)

func main() {
	ctx := context.Background()

	// Create an AdminClient using your service account credentials
	adminClient, err := pinecone.NewAdminClient(pinecone.NewAdminClientParams{
		ClientId:     "YOUR_CLIENT_ID",
		ClientSecret: "YOUR_CLIENT_SECRET",
	})
	if err != nil {
		log.Fatalf("failed to create AdminClient: %v", err)
	}

	// Create a new project
	project, err := adminClient.Project.Create(ctx, &pinecone.CreateProjectParams{
		Name: "example-project",
	})
	if err != nil {
		log.Fatalf("failed to create project: %v", err)
	}
	fmt.Printf("Created project: %s\n", project.Name)

	// Create a new API key within that project
	apiKey, err := adminClient.APIKey.Create(ctx, project.Id, &pinecone.CreateAPIKeyParams{
		Name: "example-api-key",
	})
	if err != nil {
		log.Fatalf("failed to create API key: %v", err)
	}
	fmt.Printf("Created API key: %s\n", apiKey.Id)

	// List all projects
	projects, err := adminClient.Project.List(ctx)
	if err != nil {
		log.Fatalf("failed to list projects: %v", err)
	}
	fmt.Printf("You have %d project(s)\n", len(projects))

	// List API keys for the created project
	apiKeys, err := adminClient.APIKey.List(ctx, project.Id)
	if err != nil {
		log.Fatalf("failed to list API keys: %v", err)
	}
	fmt.Printf("Project '%s' has %d API key(s)\n", project.Name, len(apiKeys))
}

Full Changelog: v4.0.1...v4.1.0