
Commit e7a061a

ooples and Claude authored
fix: resolve 6 unresolved pr review comments from #424 (#486)
* fix: correct onnx attributeproto field numbers per spec

  Changed field numbers to match the ONNX protobuf specification:
  - Field 20 for type (was field 3)
  - Field 3 for int value (was field 4)
  - Field 2 for float value (was field 5)
  - Field 4 for string value (was field 6)
  - Field 8 for repeated ints (unchanged, was correct)

  This prevents corrupt ONNX attributes when exporting models.
  Fixes critical code review issue #4 from PR #424.

  Generated with Claude Code
  Co-Authored-By: Claude <[email protected]>

* fix: preserve coreml-specific configuration during export

  CoreMLExporter was converting CoreMLConfiguration to a generic ExportConfiguration, losing CoreML-specific settings like ComputeUnits, MinimumDeploymentTarget, SpecVersion, InputFeatures, OutputFeatures, and FlexibleInputShapes. This fix:
  - Stores the original CoreMLConfiguration in PlatformSpecificOptions during ExportToCoreML
  - Retrieves the preserved configuration in ConvertOnnxToCoreML
  - Falls back to creating a default config for backward compatibility

  Addresses PR #424 review comment: exporter drops CoreML-specific configuration

* fix: add explicit null guard for directory creation

  Added production-ready null handling for Path.GetDirectoryName edge cases:
  - Explicit null check before directory operations
  - Changed IsNullOrEmpty to IsNullOrWhiteSpace for better validation
  - Added clarifying comments about edge cases (root paths, relative filenames)
  - Documented fallback behavior when the directory is null or empty

  Addresses PR #424 review comment: null directory edge case handling

* fix: use constraint-free hash computation in modelcache

  Replaced Marshal.SizeOf/Buffer.BlockCopy hashing with a GetHashCode-based approach:
  - Removed the requirement for a T : unmanaged constraint
  - Uses unchecked hash combining with prime multipliers (17, 31)
  - Samples large arrays (max 100 elements) for performance
  - Includes the array length and last element for better distribution
  - Proper null handling for reference types

  This allows ModelCache to work with any numeric type without cascading constraint requirements through DeploymentRuntime, PredictionModelResult, and dozens of other classes.

  Addresses PR #424 review comment: ModelCache T constraint for hashing semantics

* fix: correct event ordering in telemetrycollector getevents

  Fixed incorrect ordering logic where Take(limit) was applied before OrderByDescending(timestamp), causing arbitrary events to be returned instead of the most recent ones. Changed:

    _events.Take(limit).OrderByDescending(e => e.Timestamp)

  to:

    _events.OrderByDescending(e => e.Timestamp).Take(limit)

  This ensures the method returns the MOST RECENT events as intended, not random events from the ConcurrentBag. Added clarifying documentation explaining the fix and the return value semantics. (A sketch of the corrected query appears right after this part of the message.)

  Addresses PR #424 review comment: GetEvents ordering issue

* fix: add comprehensive validation for tensorrt configuration

  Added production-ready validation to prevent invalid TensorRT configurations:

  1. ForInt8() method validation:
     - Throws ArgumentNullException if the calibration data path is null/whitespace
     - Ensures INT8 configurations always have calibration data

  2. New Validate() method checks:
     - INT8 enabled requires a non-empty CalibrationDataPath
     - Calibration data file exists if a path is provided
     - MaxBatchSize >= 1
     - MaxWorkspaceSize >= 0
     - BuilderOptimizationLevel in the valid range [0-5]
     - NumStreams >= 1 when EnableMultiStream is true

  This prevents runtime failures from misconfigured TensorRT engines, especially the critical INT8-without-calibration-data scenario.

  Addresses PR #424 review comment: TensorRTConfiguration calibration data validation
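For reference, a minimal standalone sketch of the GetEvents ordering fix described above. It assumes _events is a ConcurrentBag<TelemetryEvent> and that TelemetryEvent carries a Timestamp, as the commit message indicates; this illustrates the before/after queries and is not the project's actual TelemetryCollector code.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

internal sealed record TelemetryEvent(string Name, DateTime Timestamp);

internal sealed class TelemetryCollectorSketch
{
    private readonly ConcurrentBag<TelemetryEvent> _events = new();

    public void Record(string name) =>
        _events.Add(new TelemetryEvent(name, DateTime.UtcNow));

    // Buggy: Take(limit) pulls `limit` arbitrary events from the unordered
    // bag first, then sorts only that arbitrary subset.
    public List<TelemetryEvent> GetEventsBuggy(int limit) =>
        _events.Take(limit).OrderByDescending(e => e.Timestamp).ToList();

    // Fixed: sort all events by timestamp first, then take the most recent.
    public List<TelemetryEvent> GetEvents(int limit) =>
        _events.OrderByDescending(e => e.Timestamp).Take(limit).ToList();
}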
* fix: add bounds checking for inputsize/outputsize casts in coreml proto

  Validate InputSize and OutputSize are non-negative before casting to ulong, to prevent negative values from wrapping to large unsigned values in CoreML protobuf serialization. (See the cast-wrap sketch after this part of the message.)

* fix: add production-ready onnx parsing with type validation and correct shape extraction

  This commit fixes three critical issues in ONNX→CoreML conversion:

  1. **Data type validation in ParseTensor**: Now reads and validates the data_type field (field 5), ensuring only FLOAT tensors are converted. Throws NotSupportedException for unsupported types (DOUBLE, INT8, etc.) instead of silently corrupting data.

  2. **Correct TypeProto parsing**: Fixed ParseTypeProto to properly handle the nested ONNX protobuf structure (TypeProto → tensor_type → shape → dim → dim_value) instead of incorrectly treating every varint as a dimension. This fixes tensor shape extraction for model inputs/outputs.

  3. **Accurate InnerProduct layer sizing**: Changed from a Math.Sqrt approximation (which assumed square matrices) to using the actual tensor shape from the ONNX dims. For MatMul/Gemm layers, correctly extracts [out_dim, in_dim] from the weight tensor shape.

  Technical changes:
  - ParseTensor now returns OnnxTensor with Name, Data, and Shape fields
  - Added OnnxTensor class to store tensor metadata alongside the float data
  - Updated OnnxGraphInfo.Initializers from Dictionary<string, float[]> to Dictionary<string, OnnxTensor>
  - Added ParseTensorTypeProto, ParseTensorShapeProto, and ParseDimensionProto helper methods
  - ConvertOperatorToLayer uses shape[0] and shape[1] for layer sizing, with a sqrt fallback

* fix: preserve all configuration properties across cloning and deserialization

  This ensures deployment behavior, model adaptation capabilities, and training history are maintained when copying or reloading models. Updated three methods:

  1. WithParameters: Now passes LoRAConfiguration, CrossValidationResult, AgentConfig, AgentRecommendation, and DeploymentConfiguration to the constructor
  2. DeepCopy: Same as WithParameters, for consistency
  3. Deserialize: Now assigns all RAG components (RagRetriever, RagReranker, RagGenerator, QueryProcessors) and configuration properties (LoRAConfiguration, CrossValidationResult, AgentConfig, AgentRecommendation, DeploymentConfiguration) from the deserialized object

  This fixes the issue where deployment/export/runtime settings, LoRA configurations, and meta-learning properties were lost when calling WithParameters, DeepCopy, or Deserialize.
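Regarding the InputSize/OutputSize bounds check above: casting a negative signed size to ulong does not throw in C#'s default unchecked context, it silently wraps to a huge value that would then be serialized into the CoreML protobuf. A standalone demonstration (not project code):

using System;

class CastWrapDemo
{
    static void Main()
    {
        int inputSize = -1; // e.g., a corrupted or uninitialized layer size
        ulong wrapped = (ulong)inputSize; // wraps instead of throwing
        Console.WriteLine(wrapped);       // 18446744073709551615 (2^64 - 1)

        // A checked cast would surface the bug as an exception instead:
        // ulong safe = checked((ulong)(long)inputSize); // throws OverflowException
    }
}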
* fix: correct onnx field numbers and address pr review comments

  CRITICAL: Fix ONNX TensorProto field number compliance:
  - OnnxProto.cs: Change field 3 → 8 for tensor name per ONNX spec
  - OnnxToCoreMLConverter.cs: Fix all TensorProto fields (1=dims, 2=data_type, 8=name, 9=raw_data)
  - Previous incorrect field numbers would cause empty tensor names and broken shape inference

  Additional fixes:
  - CoreMLExporter.cs: Fix QuantizationBits mapping (Int8→8, Float16→16, default→32)
  - TensorRTConfiguration.cs: Use ArgumentException instead of ArgumentNullException for whitespace validation
  - ModelExporterBase.cs: Remove redundant null check (IsNullOrWhiteSpace handles null)

  Addresses PR #486 review comments #1, #2, #4, #5, #6

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <[email protected]>

* style: use ternary operator for coreml config assignment

  Simplify CoreMLExporter.cs by using the ternary conditional operator instead of if/else for the CoreMLConfiguration assignment.

  Addresses PR #486 review comment #5

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <[email protected]>

* fix: replace gethashcode with sha256 for model cache correctness

  CRITICAL: Model caching requires cryptographically secure hashing to prevent hash collisions that would cause incorrect predictions.

  Previous GetHashCode() approach issues:
  - Hash collision probability ~2^-32 (unacceptable for ML inference)
  - Non-deterministic across .NET runtimes, machines, and process restarts
  - Sampled only 100 elements from large arrays (incomplete hashing)
  - Could return the same cache entry for different inputs (silent data corruption)

  SHA256-based approach:
  - Collision probability ~2^-256 (cryptographically secure)
  - Deterministic and stable across all platforms and runtimes
  - Hashes ALL array elements for complete correctness
  - Ensures cached results always match the correct input

  Performance impact: SHA256 hashing adds microseconds, inference takes milliseconds/seconds; the overhead is negligible compared to model inference time.

  This fix prioritizes correctness over premature optimization. For production ML systems, silent data corruption from hash collisions is unacceptable.

  Addresses PR #486 review comment #3

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <[email protected]>

---------

Co-authored-by: Claude <[email protected]>
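A minimal sketch of the SHA256-based cache-key idea from the last commit above: deterministic across machines and process restarts, and covering every element of the input. The helper name ComputeCacheKey is illustrative, not the actual ModelCache API; SHA256.HashData and Convert.ToHexString assume .NET 5 or later.

using System;
using System.Security.Cryptography;

internal static class CacheKeySketch
{
    // Hashes ALL elements deterministically, unlike GetHashCode-based
    // combining, which sampled elements and varies across runtimes.
    public static string ComputeCacheKey(float[] input)
    {
        var bytes = new byte[input.Length * sizeof(float)];
        Buffer.BlockCopy(input, 0, bytes, 0, bytes.Length);
        byte[] digest = SHA256.HashData(bytes); // 32-byte digest
        return Convert.ToHexString(digest);     // stable hex cache key
    }
}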
1 parent f6e4cb2 commit e7a061a

File tree

5 files changed: +191 additions, -27 deletions


src/Deployment/Export/Onnx/OnnxProto.cs (2 additions, 2 deletions)

@@ -353,8 +353,8 @@ private static byte[] CreateTensorProto<T>(string name, Vector<T> data)
         writer.WriteTag(2, WireFormat.WireType.Varint);
         writer.WriteInt32(GetDataTypeValue(typeof(T)));

-        // Field 3: name
-        writer.WriteTag(3, WireFormat.WireType.LengthDelimited);
+        // Field 8: name (per ONNX TensorProto specification)
+        writer.WriteTag(8, WireFormat.WireType.LengthDelimited);
         writer.WriteString(name);

         // Field 9: raw_data

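Why the field number change matters at the byte level: a protobuf tag is (fieldNumber << 3) | wireType, so writing the tensor name under field 3 emits tag 0x1A while ONNX-compliant readers look for field 8 (tag 0x42) and see an empty name. A standalone illustration:

using System;

class ProtoTagDemo
{
    const int LengthDelimited = 2; // protobuf wire type for strings/bytes

    static void Main()
    {
        Console.WriteLine((3 << 3) | LengthDelimited); // 26 (0x1A): old, wrong tag
        Console.WriteLine((8 << 3) | LengthDelimited); // 66 (0x42): TensorProto.name
    }
}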
src/Deployment/Mobile/CoreML/CoreMLProto.cs (6 additions, 0 deletions)

@@ -263,6 +263,12 @@ private static byte[] CreateInnerProductLayer(CoreMLLayer layer)
         using var stream = new MemoryStream();
         using var writer = new CodedOutputStream(stream);

+        // Validate layer sizes before casting to prevent negative values from wrapping to large unsigned values
+        if (layer.InputSize < 0 || layer.OutputSize < 0)
+            throw new ArgumentException(
+                $"Layer '{layer.Name}' has invalid size: InputSize={layer.InputSize}, OutputSize={layer.OutputSize}. " +
+                "Both must be non-negative for CoreML protobuf serialization.");
+
         // Field 1: inputChannels
         writer.WriteTag(1, WireFormat.WireType.Varint);
         writer.WriteUInt64((ulong)layer.InputSize);

src/Deployment/Mobile/CoreML/OnnxToCoreMLConverter.cs (154 additions, 18 deletions)

@@ -75,8 +75,8 @@ private static void ParseGraph(byte[] graphBytes, OnnxGraphInfo graphInfo)
                 break;
             case 5: // initializer (weights)
                 var initBytes = reader.ReadBytes();
-                var (name, weights) = ParseTensor(initBytes.ToByteArray());
-                graphInfo.Initializers[name] = weights;
+                var tensor = ParseTensor(initBytes.ToByteArray());
+                graphInfo.Initializers[tensor.Name] = tensor;
                 break;
             case 11: // input
                 var inputBytes = reader.ReadBytes();
@@ -128,13 +128,15 @@ private static OnnxNode ParseNode(byte[] nodeBytes)
         return node;
     }

-    private static (string name, float[] weights) ParseTensor(byte[] tensorBytes)
+    private static OnnxTensor ParseTensor(byte[] tensorBytes)
     {
         using var stream = new MemoryStream(tensorBytes);
         using var reader = new CodedInputStream(stream);

         string name = string.Empty;
         float[] weights = Array.Empty<float>();
+        int dataType = -1; // ONNX TensorProto.DataType: 1 = FLOAT, 11 = DOUBLE, etc.
+        var dims = new List<long>();

         while (!reader.IsAtEnd)
         {
@@ -143,20 +145,47 @@ private static (string name, float[] weights) ParseTensor(byte[] tensorBytes)

             switch (fieldNumber)
             {
-                case 3: // name
+                case 1: // dims (repeated) - ONNX TensorProto field 1
+                    dims.Add(reader.ReadInt64());
+                    break;
+                case 2: // data_type - ONNX TensorProto field 2
+                    dataType = reader.ReadInt32();
+                    break;
+                case 8: // name - ONNX TensorProto field 8
                     name = reader.ReadString();
                     break;
                 case 9: // raw_data
                     var rawBytes = reader.ReadBytes().ToByteArray();
-                    weights = BytesToFloatArray(rawBytes);
+                    // Validate data type before conversion
+                    if (dataType == 1) // FLOAT (32-bit)
+                    {
+                        weights = BytesToFloatArray(rawBytes);
+                    }
+                    else if (dataType == -1)
+                    {
+                        // data_type field not yet encountered, assume float for backward compatibility
+                        weights = BytesToFloatArray(rawBytes);
+                    }
+                    else
+                    {
+                        throw new NotSupportedException(
+                            $"Tensor '{name}' has unsupported data type {dataType}. " +
+                            $"Only FLOAT (type 1) tensors are supported for ONNX→CoreML conversion. " +
+                            $"Common types: 1=FLOAT, 11=DOUBLE, 2=UINT8, 3=INT8, 6=INT32, 7=INT64.");
+                    }
                     break;
                 default:
                     reader.SkipLastField();
                     break;
             }
         }

-        return (name, weights);
+        return new OnnxTensor
+        {
+            Name = name,
+            Data = weights,
+            Shape = dims.Select(d => (int)d).ToArray()
+        };
     }

     private static OnnxValueInfo ParseValueInfo(byte[] valueInfoBytes)
@@ -191,7 +220,7 @@ private static OnnxValueInfo ParseValueInfo(byte[] valueInfoBytes)

     private static int[] ParseTypeProto(byte[] typeBytes)
     {
-        // Simplified: extract shape dimensions from tensor type
+        // Parse ONNX TypeProto structure: TypeProto → tensor_type → shape → repeated dim → dim_value
         var shape = new List<int>();

         using var stream = new MemoryStream(typeBytes);
@@ -200,10 +229,12 @@ private static int[] ParseTypeProto(byte[] typeBytes)
         while (!reader.IsAtEnd)
         {
             var tag = reader.ReadTag();
-            if (WireFormat.GetTagWireType(tag) == WireFormat.WireType.Varint)
+            var fieldNumber = WireFormat.GetTagFieldNumber(tag);
+
+            if (fieldNumber == 1) // tensor_type (LengthDelimited)
             {
-                var dim = (int)reader.ReadInt64();
-                if (dim > 0) shape.Add(dim);
+                var tensorTypeBytes = reader.ReadBytes().ToByteArray();
+                shape = ParseTensorTypeProto(tensorTypeBytes);
             }
             else
             {
@@ -214,6 +245,88 @@ private static int[] ParseTypeProto(byte[] typeBytes)
         return shape.ToArray();
     }

+    private static List<int> ParseTensorTypeProto(byte[] tensorTypeBytes)
+    {
+        // Parse TensorTypeProto: field 1 = elem_type (skip), field 2 = shape
+        var shape = new List<int>();
+
+        using var stream = new MemoryStream(tensorTypeBytes);
+        using var reader = new CodedInputStream(stream);
+
+        while (!reader.IsAtEnd)
+        {
+            var tag = reader.ReadTag();
+            var fieldNumber = WireFormat.GetTagFieldNumber(tag);
+
+            if (fieldNumber == 2) // shape (LengthDelimited)
+            {
+                var shapeBytes = reader.ReadBytes().ToByteArray();
+                shape = ParseTensorShapeProto(shapeBytes);
+            }
+            else
+            {
+                reader.SkipLastField(); // Skip elem_type and unknown fields
+            }
+        }
+
+        return shape;
+    }
+
+    private static List<int> ParseTensorShapeProto(byte[] shapeBytes)
+    {
+        // Parse TensorShapeProto: repeated field 1 = dim
+        var dims = new List<int>();
+
+        using var stream = new MemoryStream(shapeBytes);
+        using var reader = new CodedInputStream(stream);
+
+        while (!reader.IsAtEnd)
+        {
+            var tag = reader.ReadTag();
+            var fieldNumber = WireFormat.GetTagFieldNumber(tag);
+
+            if (fieldNumber == 1) // dim (LengthDelimited, repeated)
+            {
+                var dimBytes = reader.ReadBytes().ToByteArray();
+                var dimValue = ParseDimensionProto(dimBytes);
+                if (dimValue > 0)
+                {
+                    dims.Add(dimValue);
+                }
+            }
+            else
+            {
+                reader.SkipLastField();
+            }
+        }
+
+        return dims;
+    }
+
+    private static int ParseDimensionProto(byte[] dimBytes)
+    {
+        // Parse DimensionProto: field 1 = dim_value (Varint)
+        using var stream = new MemoryStream(dimBytes);
+        using var reader = new CodedInputStream(stream);
+
+        while (!reader.IsAtEnd)
+        {
+            var tag = reader.ReadTag();
+            var fieldNumber = WireFormat.GetTagFieldNumber(tag);
+
+            if (fieldNumber == 1) // dim_value
+            {
+                return (int)reader.ReadInt64();
+            }
+            else
+            {
+                reader.SkipLastField(); // Skip dim_param and unknown fields
+            }
+        }
+
+        return 0;
+    }
+
     private static float[] BytesToFloatArray(byte[] bytes)
     {
         var floats = new float[bytes.Length / 4];
@@ -285,7 +398,7 @@ private static CoreMLNeuralNetwork ConvertNeuralNetwork(OnnxGraphInfo onnxGraph,
         return network;
     }

-    private static CoreMLLayer? ConvertOperatorToLayer(OnnxNode op, Dictionary<string, float[]> initializers, int layerIndex)
+    private static CoreMLLayer? ConvertOperatorToLayer(OnnxNode op, Dictionary<string, OnnxTensor> initializers, int layerIndex)
     {
         var layer = new CoreMLLayer
         {
@@ -303,18 +416,31 @@ private static CoreMLNeuralNetwork ConvertNeuralNetwork(OnnxGraphInfo onnxGraph,

             // Extract weights from initializers
             var weightsKey = op.Inputs.Count > 1 ? op.Inputs[1] : null;
-            if (weightsKey != null && initializers.TryGetValue(weightsKey, out var weights))
+            if (weightsKey != null && initializers.TryGetValue(weightsKey, out var weightsTensor))
             {
-                layer.Weights = weights;
-                layer.InputSize = weights.Length / (weights.Length > 0 ? (int)Math.Sqrt(weights.Length) : 1);
-                layer.OutputSize = (int)Math.Sqrt(weights.Length);
+                layer.Weights = weightsTensor.Data;
+
+                // Use actual tensor shape instead of sqrt approximation
+                // ONNX weight matrices for MatMul/Gemm are typically [out_dim, in_dim]
+                if (weightsTensor.Shape != null && weightsTensor.Shape.Length == 2)
+                {
+                    layer.OutputSize = weightsTensor.Shape[0];
+                    layer.InputSize = weightsTensor.Shape[1];
+                }
+                else if (weightsTensor.Data.Length > 0)
+                {
+                    // Fallback for 1D or missing shape: infer square matrix (legacy behavior)
+                    var sqrtLen = (int)Math.Sqrt(weightsTensor.Data.Length);
+                    layer.InputSize = sqrtLen;
+                    layer.OutputSize = sqrtLen;
+                }
             }

             // Extract bias if present
             var biasKey = op.Inputs.Count > 2 ? op.Inputs[2] : null;
-            if (biasKey != null && initializers.TryGetValue(biasKey, out var bias))
+            if (biasKey != null && initializers.TryGetValue(biasKey, out var biasTensor))
             {
-                layer.Bias = bias;
+                layer.Bias = biasTensor.Data;
                 layer.HasBias = true;
             }
             break;
@@ -349,7 +475,7 @@ internal class OnnxGraphInfo
 {
     public string Name { get; set; } = string.Empty;
     public List<OnnxNode> Operations { get; set; } = new();
-    public Dictionary<string, float[]> Initializers { get; set; } = new();
+    public Dictionary<string, OnnxTensor> Initializers { get; set; } = new();
     public List<OnnxValueInfo> Inputs { get; set; } = new();
     public List<OnnxValueInfo> Outputs { get; set; } = new();
 }
@@ -373,3 +499,13 @@ internal class OnnxValueInfo
     public string Name { get; set; } = string.Empty;
     public int[] Shape { get; set; } = Array.Empty<int>();
 }
+
+/// <summary>
+/// ONNX tensor with data and shape information.
+/// </summary>
+internal class OnnxTensor
+{
+    public string Name { get; set; } = string.Empty;
+    public float[] Data { get; set; } = Array.Empty<float>();
+    public int[] Shape { get; set; } = Array.Empty<int>();
+}

src/Deployment/TensorRT/TensorRTConfiguration.cs (2 additions, 3 deletions)

@@ -157,12 +157,11 @@ public static TensorRTConfiguration ForHighThroughput(int batchSize = 64, string
     /// Creates a configuration with INT8 quantization.
     /// </summary>
     /// <param name="calibrationDataPath">Path to calibration data file (required for INT8 quantization)</param>
-    /// <exception cref="ArgumentNullException">Thrown when calibrationDataPath is null or whitespace</exception>
+    /// <exception cref="ArgumentException">Thrown when calibrationDataPath is null or whitespace</exception>
     public static TensorRTConfiguration ForInt8(string calibrationDataPath)
     {
         if (string.IsNullOrWhiteSpace(calibrationDataPath))
-            throw new ArgumentNullException(nameof(calibrationDataPath),
-                "Calibration data path is required for INT8 quantization");
+            throw new ArgumentException("Calibration data path cannot be null or whitespace", nameof(calibrationDataPath));

         return new TensorRTConfiguration
         {
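The commit message also describes a new Validate() method on this class that the diff above does not show. A sketch of the listed checks follows; property names come from the commit message (CalibrationDataPath, MaxBatchSize, MaxWorkspaceSize, BuilderOptimizationLevel, EnableMultiStream, NumStreams), while the EnableInt8 flag name is assumed and the real implementation may differ.

// Sketch only; requires using System.IO for File/FileNotFoundException.
public void Validate()
{
    if (EnableInt8 && string.IsNullOrWhiteSpace(CalibrationDataPath))
        throw new InvalidOperationException("INT8 requires a non-empty CalibrationDataPath.");

    if (!string.IsNullOrWhiteSpace(CalibrationDataPath) && !File.Exists(CalibrationDataPath))
        throw new FileNotFoundException("Calibration data file not found.", CalibrationDataPath);

    if (MaxBatchSize < 1)
        throw new InvalidOperationException("MaxBatchSize must be >= 1.");

    if (MaxWorkspaceSize < 0)
        throw new InvalidOperationException("MaxWorkspaceSize must be >= 0.");

    if (BuilderOptimizationLevel < 0 || BuilderOptimizationLevel > 5)
        throw new InvalidOperationException("BuilderOptimizationLevel must be in [0, 5].");

    if (EnableMultiStream && NumStreams < 1)
        throw new InvalidOperationException("NumStreams must be >= 1 when EnableMultiStream is true.");
}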

src/Models/Results/PredictionModelResult.cs (27 additions, 4 deletions)

@@ -995,7 +995,8 @@ public IFullModel<T, TInput, TOutput> WithParameters(Vector<T> parameters)
         updatedOptimizationResult.BestSolution = newModel;

         // Create new result with updated optimization result
-        // Use constructor that preserves BiasDetector, FairnessEvaluator, and RAG components
+        // Preserve all configuration properties to ensure deployment behavior, model adaptation,
+        // and training history are maintained across parameter updates
         return new PredictionModelResult<T, TInput, TOutput>(
             updatedOptimizationResult,
             NormalizationInfo,
@@ -1004,7 +1005,12 @@ public IFullModel<T, TInput, TOutput> WithParameters(Vector<T> parameters)
             RagRetriever,
             RagReranker,
             RagGenerator,
-            QueryProcessors);
+            QueryProcessors,
+            loraConfiguration: LoRAConfiguration,
+            crossValidationResult: CrossValidationResult,
+            agentConfig: AgentConfig,
+            agentRecommendation: AgentRecommendation,
+            deploymentConfiguration: DeploymentConfiguration);
     }

     /// <summary>
@@ -1090,7 +1096,8 @@ public IFullModel<T, TInput, TOutput> DeepCopy()

         var clonedNormalizationInfo = NormalizationInfo.DeepCopy();

-        // Use constructor that preserves BiasDetector, FairnessEvaluator, and RAG components
+        // Preserve all configuration properties to ensure deployment behavior, model adaptation,
+        // and training history are maintained across deep copy
         return new PredictionModelResult<T, TInput, TOutput>(
             clonedOptimizationResult,
             clonedNormalizationInfo,
@@ -1099,7 +1106,12 @@ public IFullModel<T, TInput, TOutput> DeepCopy()
             RagRetriever,
             RagReranker,
             RagGenerator,
-            QueryProcessors);
+            QueryProcessors,
+            loraConfiguration: LoRAConfiguration,
+            crossValidationResult: CrossValidationResult,
+            agentConfig: AgentConfig,
+            agentRecommendation: AgentRecommendation,
+            deploymentConfiguration: DeploymentConfiguration);
     }

     /// <summary>
@@ -1218,6 +1230,17 @@ public void Deserialize(byte[] data)
             ModelMetaData = deserializedObject.ModelMetaData;
             BiasDetector = deserializedObject.BiasDetector;
             FairnessEvaluator = deserializedObject.FairnessEvaluator;
+
+            // Preserve RAG components and all configuration properties
+            RagRetriever = deserializedObject.RagRetriever;
+            RagReranker = deserializedObject.RagReranker;
+            RagGenerator = deserializedObject.RagGenerator;
+            QueryProcessors = deserializedObject.QueryProcessors;
+            LoRAConfiguration = deserializedObject.LoRAConfiguration;
+            CrossValidationResult = deserializedObject.CrossValidationResult;
+            AgentConfig = deserializedObject.AgentConfig;
+            AgentRecommendation = deserializedObject.AgentRecommendation;
+            DeploymentConfiguration = deserializedObject.DeploymentConfiguration;
         }
         else
         {

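The underlying bug class here is general: any hand-written copy or rehydrate path silently drops whatever property it forgets to assign. A tiny self-contained illustration (not the project's API):

using System;

sealed class Settings
{
    public string Name { get; set; } = "";
    public int Retries { get; set; }

    // Buggy copy: forgets Retries, so every clone resets it to 0.
    public Settings CopyBuggy() => new Settings { Name = Name };

    // Fixed copy: carries every property across, mirroring how WithParameters,
    // DeepCopy, and Deserialize above now pass all configuration through.
    public Settings Copy() => new Settings { Name = Name, Retries = Retries };
}

class Program
{
    static void Main()
    {
        var original = new Settings { Name = "prod", Retries = 5 };
        Console.WriteLine(original.CopyBuggy().Retries); // 0: silently lost
        Console.WriteLine(original.Copy().Retries);      // 5: preserved
    }
}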