
Added InputBlockV2 support to DeepFMModel (refactored and fixed) and DCNModel #717

Merged
gabrielspmoreira merged 18 commits into main from ranking_models_inputs
Sep 20, 2022

Conversation

@gabrielspmoreira
Member

@gabrielspmoreira gabrielspmoreira commented Sep 6, 2022

Fixes #604

Goals ⚽

  • Sets InputBlockV2 as the default input block for DCNModel and DeepFMModel
  • Creates ToSparseFeatures and ToDenseFeatures to convert tensor types for selected or all features
  • Refactors DeepFMModel to extract FMBlock and fixes the implementation to match the one described in the original papers (Fixes [BUG] DeepFM implementation is incorrect and misses tests #742):
[1] Guo, Huifeng, et al. "DeepFM: A Factorization-Machine based Neural Network for CTR Prediction." arXiv:1703.04247 (2017).
[2] Rendle, Steffen. "Factorization Machines." IEEE International Conference on Data Mining, 2010.
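For context on the FMBlock fix, the second-order interaction term from [2] can be computed in linear time with the identity 0.5 * ((Σᵢ eᵢ)² − Σᵢ eᵢ²), summed over the embedding dimensions. A minimal NumPy sketch of the math (function names here are illustrative, not the library's FMPairwiseInteraction implementation):

```python
import numpy as np

def fm_pairwise_interaction(emb):
    """Linear-time FM second-order term. emb: (batch, num_features, dim)."""
    sum_then_square = np.sum(emb, axis=1) ** 2   # (batch, dim)
    square_then_sum = np.sum(emb ** 2, axis=1)   # (batch, dim)
    return 0.5 * np.sum(sum_then_square - square_then_sum, axis=1)  # (batch,)

def fm_bruteforce(emb):
    """O(n^2) reference: sum of dot products over all feature pairs i < j."""
    batch, n, _ = emb.shape
    out = np.zeros(batch)
    for i in range(n):
        for j in range(i + 1, n):
            out += np.sum(emb[:, i] * emb[:, j], axis=1)
    return out

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 5, 8))  # 4 examples, 5 categorical features, dim 16 -> 8 here
assert np.allclose(fm_pairwise_interaction(emb), fm_bruteforce(emb))
```

Note the output is a single scalar per example, which is why the FM tower should be 1-D before being summed with the deep tower.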

Implementation Details 🚧

  • Previously, InputBlock was used as the input block for both DCNModel and DeepFMModel
  • For DCNModel, users were previously only able to set embedding_options. Now they can provide their own input block, or pass custom args to Embeddings() via InputBlockV2(..., **kwargs)
  • The previous implementation of DeepFMModel had the following issues that are fixed here:
    • In the wide part, the one-hot encoding of categorical features was not sparse, which could cause OOM errors for high-cardinality features.
    • DeepFMModel supported only categorical features; if continuous features were passed to the branch with FMPairwiseInteraction, an error was raised.
    • The FM and deep tower outputs should each be 1-D and summed together. Instead, those towers were outputting multiple dimensions (>1), which were projected to a single dim by an extra MLP layer that is not part of the original architecture described in the papers.
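To give a sense of why the non-sparse one-hot in the wide part matters, here is a back-of-the-envelope sketch (the vocabulary and batch sizes are hypothetical, chosen only to show the scale):

```python
import numpy as np

cardinality = 1_000_000  # hypothetical high-cardinality vocabulary (e.g. an item_id column)
batch = 512
rng = np.random.default_rng(0)
ids = rng.integers(0, cardinality, size=batch)

# A dense one-hot batch materializes batch * cardinality float32 values:
dense_bytes = batch * cardinality * 4  # ~2 GB for a single batch

# A sparse (COO-style) one-hot stores just one (row, col, value) triple per sample:
rows = np.arange(batch, dtype=np.int64)
cols = ids.astype(np.int64)
vals = np.ones(batch, dtype=np.float32)
sparse_bytes = rows.nbytes + cols.nbytes + vals.nbytes  # ~10 KB

assert sparse_bytes < dense_bytes
```

This is why the fix converts the wide-part one-hot features to sparse tensors for high-cardinality columns.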

Testing Details 🔍

  • There were no tests for DeepFMModel, so tests were added and the bugs above were fixed

@gabrielspmoreira gabrielspmoreira self-assigned this Sep 6, 2022
@gabrielspmoreira gabrielspmoreira added area/api area/ranking chore Maintenance for the repository breaking Breaking change labels Sep 6, 2022
@gabrielspmoreira gabrielspmoreira added this to the Merlin 22.09 milestone Sep 6, 2022
@gabrielspmoreira gabrielspmoreira removed the breaking Breaking change label Sep 6, 2022
@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit 5e8b254f7cc54689f3eeb5cc9605d728829bbb90, no merge conflicts.
Running as SYSTEM
Setting status of 5e8b254f7cc54689f3eeb5cc9605d728829bbb90 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1140/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse 5e8b254f7cc54689f3eeb5cc9605d728829bbb90^{commit} # timeout=10
Checking out Revision 5e8b254f7cc54689f3eeb5cc9605d728829bbb90 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5e8b254f7cc54689f3eeb5cc9605d728829bbb90 # timeout=10
Commit message: "Added support to InputBlockV2 to DCNModel and DeepFMModel"
 > git rev-list --no-walk 49f1b815499fa1aaab3dc847c193b67aca24be1f # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins2149850330022829085.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 683 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 27%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 42%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s......F......... [ 63%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py ..............FF................ [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
____________________ test_train_metrics_steps[60-10-3-6-2] _____________________

num_rows = 60, batch_size = 10, train_metrics_steps = 3, expected_steps = 6
expected_metrics_steps = 2

@pytest.mark.parametrize(
    ["num_rows", "batch_size", "train_metrics_steps", "expected_steps", "expected_metrics_steps"],
    [
        (1, 1, 1, 1, 1),
        (60, 10, 2, 6, 3),
        (60, 10, 3, 6, 2),
        (120, 10, 4, 12, 3),
    ],
)
def test_train_metrics_steps(
    num_rows, batch_size, train_metrics_steps, expected_steps, expected_metrics_steps
):
    dataset = generate_data("e-commerce", num_rows=num_rows)
    model = ml.Model(
        ml.InputBlock(dataset.schema),
        ml.MLPBlock([64]),
        ml.BinaryClassificationTask("click"),
    )
    model.compile(
        run_eagerly=True,
        optimizer="adam",
        metrics=[tf.keras.metrics.AUC(from_logits=True, name="auc")],
    )
    metrics_callback = MetricsLogger()
    callbacks = [metrics_callback]
    _ = model.fit(
        dataset,
        callbacks=callbacks,
        epochs=1,
        batch_size=batch_size,
        train_metrics_steps=train_metrics_steps,
    )
    epoch0_logs = metrics_callback.epoch_logs[0]

    # number of times compute_metrics called (number of batches in epoch)
    assert len(epoch0_logs) == expected_steps

    # number of times metrics computed (every train_metrics_steps batches)
  assert len({metrics["auc"] for metrics in epoch0_logs}) == expected_metrics_steps

E assert 1 == 2
E + where 1 = len({0.4166666865348816})

tests/unit/tf/models/test_base.py:139: AssertionError
----------------------------- Captured stdout call -----------------------------

1/6 [====>.........................] - ETA: 1s - loss: 0.7019 - auc: 0.4167 - regularization_loss: 0.0000e+00
2/6 [=========>....................] - ETA: 0s - loss: 0.7014 - auc: 0.4167 - regularization_loss: 0.0000e+00
3/6 [==============>...............] - ETA: 0s - loss: 0.7013 - auc: 0.4167 - regularization_loss: 0.0000e+00
4/6 [===================>..........] - ETA: 0s - loss: 0.6997 - auc: 0.4167 - regularization_loss: 0.0000e+00
5/6 [========================>.....] - ETA: 0s - loss: 0.7022 - auc: 0.4167 - regularization_loss: 0.0000e+00
6/6 [==============================] - ETA: 0s - loss: 0.6984 - auc: 0.4167 - regularization_loss: 0.0000e+00
6/6 [==============================] - 1s 130ms/step - loss: 0.6984 - auc: 0.4167 - regularization_loss: 0.0000e+00
___________________________ test_deepfm_model[True] ____________________________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f2e22806580>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:497: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:289: in call
outputs = call_layer(layer, outputs, training=training, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:497: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
___________________________ test_deepfm_model[False] ___________________________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f2e294ddc70>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:497: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:289: in call
outputs = call_layer(layer, outputs, training=training, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:497: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 34 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 18 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileq9g8moqx.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:980: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
===== 3 failed, 669 passed, 11 skipped, 1017 warnings in 947.00s (0:15:47) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins17529944726332883267.sh


# TODO: Identify why we need to set the schema manually for `InputBlockV2`
# to avoid an error about the resulting ParallelBlock having no schema
input_block.set_schema(schema)
Member Author


@marcromeyn I had to include this hack because InputBlockV2 returns a ParallelBlock without a schema in this case.
I suspect this happens because the data for this test contains only categorical features, so the Filter() for continuous features ends up empty and not all branches within the ParallelBlock carry a schema.
Is this related to your recent refactoring that makes ParallelBlock schema-aware?
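To illustrate the suspected cause, here is a hypothetical minimal sketch (the `Branch` and `ParallelBlock` classes below are illustrative stand-ins, not the Merlin API) of how a parent block can end up without a schema when one of its branches carries none, and why setting it manually works around the error:

```python
class Branch:
    """Stand-in for a child block; `schema` is a set of feature names or None."""
    def __init__(self, schema=None):
        self.schema = schema

class ParallelBlock:
    """Stand-in parent: derives its schema as the union of branch schemas,
    but only when every branch actually carries one."""
    def __init__(self, *branches):
        self.branches = branches
        schemas = [b.schema for b in branches if b.schema is not None]
        if schemas and len(schemas) == len(branches):
            self.schema = set().union(*schemas)
        else:
            self.schema = None  # schema lost: mirrors the error being worked around

    def set_schema(self, schema):
        # The manual workaround from the snippet above.
        self.schema = schema

categorical = Branch({"user_id", "item_id"})
continuous = Branch(None)  # Filter() matched no continuous features
block = ParallelBlock(categorical, continuous)
assert block.schema is None
block.set_schema({"user_id", "item_id"})  # hack: restore the schema by hand
```

A schema-aware ParallelBlock would instead propagate the union of the non-empty branch schemas (or drop empty branches), which is what the later fix in this PR makes the workaround unnecessary.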

Member Author


This hack is no longer needed, as @marcromeyn implemented a fix for this in InputBlockV2 in this PR.

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit e4c777ceda6d7b063c6f7df40fb181c47dfc9002, no merge conflicts.
Running as SYSTEM
Setting status of e4c777ceda6d7b063c6f7df40fb181c47dfc9002 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1144/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse e4c777ceda6d7b063c6f7df40fb181c47dfc9002^{commit} # timeout=10
Checking out Revision e4c777ceda6d7b063c6f7df40fb181c47dfc9002 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e4c777ceda6d7b063c6f7df40fb181c47dfc9002 # timeout=10
Commit message: "Fixed error on Jenkins GPU CI tests (an error occurs when CategoryEncoding is called with sparse=False for CI machine)"
 > git rev-list --no-walk 1d48d951a4fb336739b6ffde04ed295dd8928290 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins10010199132761638169.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 683 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 27%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 42%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 63%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py ................................ [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 34 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 18 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_fileubbsdxd3.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/sequential_block_4/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/sequential_block_4/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/sequential_block_4/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:980: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 672 passed, 11 skipped, 1018 warnings in 950.72s (0:15:50) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins12423255128051031398.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit 53d1ec95c7f4ac56c18b74d872e3818a8bea443a, no merge conflicts.
Running as SYSTEM
Setting status of 53d1ec95c7f4ac56c18b74d872e3818a8bea443a to PENDING with url https://10.20.13.93:8080/job/merlin_models/1173/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse 53d1ec95c7f4ac56c18b74d872e3818a8bea443a^{commit} # timeout=10
Checking out Revision 53d1ec95c7f4ac56c18b74d872e3818a8bea443a (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 53d1ec95c7f4ac56c18b74d872e3818a8bea443a # timeout=10
Commit message: "Removing the hack that set the schema of input block for DeepFMModel"
 > git rev-list --no-walk fbd2083e16db79c010aa95ff7a372c102773a00c # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins2898401193199117777.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 686 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 41%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 62%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py ..............FFFF................ [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_deepfm_model_only_categ_feats[True] ___________________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f692b0f3460>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:269: in call
return call_sequentially(self.layers, inputs, training=training, **kwargs)
merlin/models/tf/core/combinators.py:752: in call_sequentially
outputs = call_layer(layer, outputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().call(inputs, *args, **kwargs) # type: ignore
merlin/models/config/schema.py:58: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in call
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
>     raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
__________________ test_deepfm_model_only_categ_feats[False] ___________________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f695031dbb0>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:269: in call
return call_sequentially(self.layers, inputs, training=training, **kwargs)
merlin/models/tf/core/combinators.py:752: in call_sequentially
outputs = call_layer(layer, outputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
>     raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
______________ test_deepfm_model_categ_and_continuous_feats[True] ______________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f6932c631f0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:269: in call
return call_sequentially(self.layers, inputs, training=training, **kwargs)
merlin/models/tf/core/combinators.py:752: in call_sequentially
outputs = call_layer(layer, outputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
>     raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_age': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
_____________ test_deepfm_model_categ_and_continuous_feats[False] ______________

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7f6933989100>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:725: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/models/base.py:923: in call
outputs, context = self._call_child(block, outputs, context)
merlin/models/tf/models/base.py:952: in _call_child
outputs = call_layer(child, inputs, **call_kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:269: in call
return call_sequentially(self.layers, inputs, training=training, **kwargs)
merlin/models/tf/core/combinators.py:752: in call_sequentially
outputs = call_layer(layer, outputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:96: in error_handler
raise e
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/combinators.py:492: in call
out = call_layer(layer, layer_inputs, **kwargs)
merlin/models/tf/utils/tf_utils.py:403: in call_layer
return layer(inputs, *args, **filtered_kwargs)
merlin/models/tf/core/tabular.py:483: in _tabular_call
outputs = self.super().__call__(inputs, *args, **kwargs)  # type: ignore
merlin/models/config/schema.py:58: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1014: in __call__
outputs = call_fn(inputs, *args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:146: in error_handler
raise new_e.with_traceback(e.__traceback__) from None
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:92: in error_handler
return fn(*args, **kwargs)
merlin/models/tf/core/transformations.py:569: in call
outputs[name] = utils.encode_categorical_inputs(
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:124: in encode_categorical_inputs
bincounts = dense_bincount(inputs, depth, binary_output, dtype,
/usr/local/lib/python3.8/dist-packages/keras/layers/preprocessing/preprocessing_utils.py:68: in dense_bincount
result = tf.math.bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/bincount_ops.py:211: in bincount
return gen_math_ops.dense_bincount(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/ops/gen_math_ops.py:3020: in dense_bincount
_ops.raise_from_not_ok_status(e, name)


e = _NotOkStatusException(), name = None

def raise_from_not_ok_status(e, name):
  e.message += (" name: " + name if name is not None else "")
>     raise core._status_to_exception(e) from None  # pylint: disable=protected-access

E tensorflow.python.framework.errors_impl.UnimplementedError: Exception encountered when calling layer "category_encoding" (type CategoryEncoding).
E
E Determinism is not yet supported in GPU implementation of DenseBincount. [Op:DenseBincount]
E
E Call arguments received by layer "category_encoding" (type CategoryEncoding):
E • inputs={'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)'}
E • kwargs={'training': 'False', 'features': {'item_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'item_category': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_id': 'tf.Tensor(shape=(50, 1), dtype=int64)', 'user_age': 'tf.Tensor(shape=(50, 1), dtype=int64)'}, 'testing': 'False'}

/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/ops.py:7164: UnimplementedError
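The four failures above share one root cause: the `CategoryEncoding` layer's multi-hot path delegates to `tf.math.bincount` (`Op:DenseBincount`), which currently has no deterministic GPU kernel, so it raises `UnimplementedError` when op determinism is enabled on a GPU runner. For illustration only, here is a pure-Python sketch of the per-row computation that kernel performs (`binary_category_encoding` is an illustrative name, not a Merlin or Keras API):

```python
def binary_category_encoding(batch_ids, depth):
    """Multi-hot encode a batch of integer id rows into {0, 1} vectors of
    length `depth` -- the counting that CategoryEncoding hands off to
    tf.math.bincount with binary_output=True (clipping counts to 0/1)."""
    encoded = []
    for row in batch_ids:
        counts = [0] * depth
        for idx in row:
            counts[idx] += 1        # per-row bincount
        encoded.append([min(c, 1) for c in counts])  # binary output
    return encoded

# e.g. a batch of single-id rows, like the (50, 1) id tensors in the traceback
print(binary_category_encoding([[2], [0]], depth=4))
# → [[0, 0, 1, 0], [1, 0, 0, 0]]
```

Because the result is just per-row counts, the op itself is simple; the failure is purely about the GPU kernel lacking a deterministic implementation, which is why these tests pass on CPU.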
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file6e3x9b0z.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:980: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
===== 4 failed, 671 passed, 11 skipped, 1028 warnings in 966.00s (0:16:05) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins8099385560801091154.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit cb87acc168fd00b7bc2df168e5e3978fc0d3bf29, no merge conflicts.
Running as SYSTEM
Setting status of cb87acc168fd00b7bc2df168e5e3978fc0d3bf29 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1175/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse cb87acc168fd00b7bc2df168e5e3978fc0d3bf29^{commit} # timeout=10
Checking out Revision cb87acc168fd00b7bc2df168e5e3978fc0d3bf29 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f cb87acc168fd00b7bc2df168e5e3978fc0d3bf29 # timeout=10
Commit message: "Added ToSparseFeatures and ToDenseFeatures"
 > git rev-list --no-walk de8faf947cc7399b547067903439225244180b4d # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins3530415144233309613.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 686 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 41%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 62%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileda20_r3u.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/sequential_block_5/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/sequential_block_5/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/sequential_block_5/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1078: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 675 passed, 11 skipped, 1030 warnings in 963.02s (0:16:03) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins6133307794422847305.sh

@gabrielspmoreira gabrielspmoreira changed the title Added support to InputBlockV2 to DCNModel and DeepFMModel Added support to InputBlockV2 to DCNModel and DeepFMModel (refactored and fixed) Sep 9, 2022
@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit c18fc738224e55beaff4cb6002eb8b5e4451aef1, no merge conflicts.
Running as SYSTEM
Setting status of c18fc738224e55beaff4cb6002eb8b5e4451aef1 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1176/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse c18fc738224e55beaff4cb6002eb8b5e4451aef1^{commit} # timeout=10
Checking out Revision c18fc738224e55beaff4cb6002eb8b5e4451aef1 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c18fc738224e55beaff4cb6002eb8b5e4451aef1 # timeout=10
Commit message: "Adjusting FMBlock"
 > git rev-list --no-walk cb87acc168fd00b7bc2df168e5e3978fc0d3bf29 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins2083787317041757524.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 686 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 41%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 62%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file0l5ezxq4.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1078: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 675 passed, 11 skipped, 1030 warnings in 959.05s (0:15:59) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins14105039611225086074.sh

@nvidia-merlin-bot
Click to view CI Results
GitHub pull request #717 of commit 9eae80eca9cea11b6c1d7a2d51ecad5840417828, no merge conflicts.
Running as SYSTEM
Setting status of 9eae80eca9cea11b6c1d7a2d51ecad5840417828 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1177/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse 9eae80eca9cea11b6c1d7a2d51ecad5840417828^{commit} # timeout=10
Checking out Revision 9eae80eca9cea11b6c1d7a2d51ecad5840417828 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9eae80eca9cea11b6c1d7a2d51ecad5840417828 # timeout=10
Commit message: "Fixing lint issues"
 > git rev-list --no-walk c18fc738224e55beaff4cb6002eb8b5e4451aef1 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins5817833640320918358.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 686 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py . [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 41%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 62%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 83%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_file2lolhene.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1078: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 675 passed, 11 skipped, 1030 warnings in 961.70s (0:16:01) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins3589659317840274405.sh

pre: Optional[BlockType] = None,
post: Optional[BlockType] = None,
aggregation: Optional[TabularAggregationType] = "concat",
**kwargs,
Contributor

If we're passing these kwargs to the Embeddings, maybe we can rename this to embeddings_kwargs?

And why would these kwargs on the input block be passed to the embeddings instead of the ParallelBlock, for example? If the embeddings need to be customized from the default, that can already be achieved by passing in the embeddings param.

Member Author

Makes sense @oliverholworthy. As the user can always provide the embeddings as an argument, the **kwargs can be forwarded to another block (ParallelBlock).
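The design question above can be sketched in plain Python. This is a toy stand-in, not the Merlin Models source: `Embeddings`, `ParallelBlock`, and `input_block_v2` here are hypothetical minimal classes illustrating why, once a custom `embeddings` can be passed directly, the remaining `**kwargs` are better forwarded to the outer ParallelBlock:

```python
class Embeddings:
    """Toy stand-in for the embeddings branch of an input block."""
    def __init__(self, schema, **kwargs):
        self.schema, self.options = schema, kwargs


class ParallelBlock:
    """Toy stand-in for a block that runs branches in parallel."""
    def __init__(self, *branches, **kwargs):
        self.branches, self.options = branches, kwargs


def input_block_v2(schema, embeddings=None, **kwargs):
    # A custom `embeddings` can always be supplied directly, so the
    # remaining **kwargs configure the outer ParallelBlock instead.
    embeddings = embeddings or Embeddings(schema)
    return ParallelBlock(embeddings, **kwargs)


block = input_block_v2("my_schema", aggregation=None)
print(block.options)  # {'aggregation': None}
```

The alternative (forwarding `**kwargs` to `Embeddings`) would make the extra arguments redundant with the `embeddings` parameter, which already covers any embedding customization.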


pairwise_block = FMPairwiseInteraction().prepare(aggregation=StackFeatures(axis=-1))
deep_block = deep_block.prepare(aggregation=ConcatFeatures())
input_block = input_block or InputBlockV2(schema, dim=embedding_dim, aggregation=None, **kwargs)
Contributor

Perhaps it would be clearer to change the default aggregation in the InputBlock to be None and change the places where an aggregation is required to pass it?

Member Author

The only places that currently use InputBlockV2(..., aggregation=None) are DeepFM and FMBlock. So it seems that is the exception and not the rule.
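The FMPairwiseInteraction used in the diff above computes the second-order factorization-machine term from Rendle's paper cited in the description, via the O(k·n) identity 0.5·Σ_f[(Σ_i v_{i,f})² − Σ_i v_{i,f}²]. A minimal NumPy sketch of that computation (not the library code; `fm_pairwise_interaction` is a hypothetical name), where the output is a single scalar per example, matching the fix that the FM tower output should be 1-d:

```python
import numpy as np


def fm_pairwise_interaction(v):
    """Sum of all pairwise dot products between feature embeddings.

    v: array of shape (batch, num_features, dim), the stacked
       per-feature embedding vectors for each example.
    Returns an array of shape (batch,).
    """
    sum_then_square = np.square(v.sum(axis=1))   # (batch, dim)
    square_then_sum = np.square(v).sum(axis=1)   # (batch, dim)
    # 0.5 * (square-of-sum - sum-of-squares), reduced over dim
    return 0.5 * (sum_then_square - square_then_sum).sum(axis=1)


# Three feature embeddings: pairwise dots are 0, 1 and 1 -> total 2.0
v = np.array([[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]])
print(fm_pairwise_interaction(v))  # [2.]
```

This identity avoids the explicit O(n²) loop over feature pairs, which is why it scales to many categorical features.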

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit ee83e4f5cd27c5d0765f207a531da22032823d32, no merge conflicts.
Running as SYSTEM
Setting status of ee83e4f5cd27c5d0765f207a531da22032823d32 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1182/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse ee83e4f5cd27c5d0765f207a531da22032823d32^{commit} # timeout=10
Checking out Revision ee83e4f5cd27c5d0765f207a531da22032823d32 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ee83e4f5cd27c5d0765f207a531da22032823d32 # timeout=10
Commit message: "Making DeepFM accept wide_inputs for more flexibility (e.g. multi-hot support), fixing bugs and adding tests to FMBlock"
 > git rev-list --no-walk 1a8a386ba97107613b751c5d88bbba5a96cf4cca # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins12955174775313121952.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 688 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 27%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 36%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 41%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 42%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 42%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 43%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 63%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_filewwz_bklm.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1077: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 677 passed, 11 skipped, 1034 warnings in 962.29s (0:16:02) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins11343461032109086018.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit d8454ebe55bea1c976169d314cb4bff261615c2c, no merge conflicts.
Running as SYSTEM
Setting status of d8454ebe55bea1c976169d314cb4bff261615c2c to PENDING with url https://10.20.13.93:8080/job/merlin_models/1183/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse d8454ebe55bea1c976169d314cb4bff261615c2c^{commit} # timeout=10
Checking out Revision d8454ebe55bea1c976169d314cb4bff261615c2c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d8454ebe55bea1c976169d314cb4bff261615c2c # timeout=10
Commit message: "Making DeepFM accept wide_inputs for more flexibility (e.g. multi-hot support), fixing bugs and adding tests to FMBlock"
 > git rev-list --no-walk ee83e4f5cd27c5d0765f207a531da22032823d32 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins16037232090776809191.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 688 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 9%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 27%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 36%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 41%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 42%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 42%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 43%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 51%]
tests/unit/tf/layers/test_queue.py .............. [ 53%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 63%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 73%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_regression.py .. [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 81%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 90%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 2 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_filew__lhxty.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1077: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 677 passed, 11 skipped, 1034 warnings in 964.24s (0:16:04) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins13190562239929945405.sh

if schema is not None:
self.column_names = schema.column_names

def call(self, inputs: TabularData, **kwargs) -> TabularData:
Contributor

If this is intended to be a base class, is it safe to remove this call method?

Member Author

Sure, the base class FeaturesTensorTypeConversion should not have the call() method, as it needs to be defined in child classes.

@gabrielspmoreira gabrielspmoreira changed the title Added support to InputBlockV2 to DCNModel and DeepFMModel (refactored and fixed) Added support to InputBlockV2 DeepFMModel (refactored and fixed) and DCNModel Sep 12, 2022
@gabrielspmoreira gabrielspmoreira changed the title Added support to InputBlockV2 DeepFMModel (refactored and fixed) and DCNModel Added InputBlockV2 support to DeepFMModel (refactored and fixed) and DCNModel Sep 12, 2022
{
"categorical_ohe": mm.SequentialBlock(
mm.Filter(cat_schema_onehot),
mm.CategoryEncoding(cat_schema_onehot, sparse=True, output_mode="one_hot"),
Contributor

If mm.CategoryEncoding already takes in a schema, can’t we make that class also do the filtering?

Member Author

The thing is that CategoryEncoding can be used with both output_mode="one_hot" and output_mode="multi_hot".
I am not sure CategoryEncoding should trust that multi-hot features will be tagged as SEQUENCE / LIST. So I used the test as an example of how the user can ensure the right features are selected for each branch.
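The one_hot vs. multi_hot distinction can be sketched with a toy dense analogue (a hypothetical helper illustrating the assumed semantics, not the actual CategoryEncoding API): a scalar categorical feature activates exactly one slot, while a list (SEQUENCE/LIST-tagged) feature activates several.

```python
import numpy as np

def category_encode(ids, cardinality, output_mode):
    """Toy dense analogue of categorical encoding.

    ids: category indices for one example (length 1 for a scalar
    feature, possibly more for a list feature).
    """
    out = np.zeros(cardinality)
    if output_mode == "one_hot":
        assert len(ids) == 1, "one_hot expects a single (scalar) category"
        out[ids[0]] = 1.0
    elif output_mode == "multi_hot":
        out[list(ids)] = 1.0  # list features activate several slots at once
    else:
        raise ValueError(f"unknown output_mode: {output_mode}")
    return out
```

This is why the branches need filtering by schema up front: the encoding itself cannot tell a scalar feature from a list feature without the tags.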


@Block.registry.register("to_sparse")
@tf.keras.utils.register_keras_serializable(package="merlin.models")
class ToSparseFeatures(FeaturesTensorTypeConversion):
Contributor

This naming confuses me. In the moving transforms PR, I am proposing to rename our current layers to ListToSparse & ListToRagged (source).

I think it would make more sense to call this one ToSparse and make it take in an optional schema in the constructor. If a schema is provided, only the features in the schema will be converted.

Member Author
@gabrielspmoreira Sep 20, 2022

Sure @marcromeyn. Renamed the new transformers to ToSparse and ToDense, and added tests to cover both cases (when schema is provided or not)
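The memory argument behind a ToSparse transform can be illustrated with a minimal dense-to-COO conversion in NumPy (a sketch of the idea only, not the merlin.models implementation, which operates on TF tensors):

```python
import numpy as np

def to_sparse_coo(dense):
    """Keep only the non-zero coordinates and values of a dense array.

    For a high-cardinality one-hot batch this avoids materializing
    mostly-zero rows, which is the OOM risk discussed in this PR.
    """
    idx = np.nonzero(dense)                    # per-axis index arrays
    values = dense[idx]                        # the non-zero entries
    indices = np.stack(idx, axis=-1)           # (nnz, ndim) coordinates
    return indices, values, dense.shape

batch = np.array([[0.0, 3.5, 0.0],
                  [1.2, 0.0, 0.0]])
indices, values, shape = to_sparse_coo(batch)  # 2 non-zeros out of 6 slots
```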

wide_input_block = wide_input_block or ParallelBlock(
{
"categorical": CategoryEncoding(cat_schema, output_mode="multi_hot", sparse=True),
"continuous": SequentialBlock(Filter(cont_schema), ToSparseFeatures()),
Contributor

This could be replaced with ToSparse(cont_schema), per my previous comment.

Contributor

Another maybe easier to read way could be:

InputBlockV2(
  cont_schema, 
  CategoryEncoding(cat_schema, output_mode="multi_hot", sparse=True),
  post=ToSparseFeatures()
)

Member Author

CategoryEncoding needs the sparse argument to select the right TF op for encoding. But I have converted the continuous branch from SequentialBlock(Filter(cont_schema), ToSparseFeatures()) to Filter(cont_schema, post=ToSparse())

)

first_order = wide_input_block.connect(
MLPBlock([1], activation="linear", use_bias=True, **kwargs)
Contributor

Seems like **kwargs is being forwarded to a few different blocks; if possible we'd like to remove this as much as possible, since it makes the code more understandable. If you want to allow full customization, it would be cleaner to add a param like wide_logit to FMBlock with Dense(1, activation="linear", use_bias=True) as the default. MLPBlock is not needed since we only need one dense layer.

Member Author

Makes sense. I was forwarding those **kwargs to be able to add regularization (dropout, kernel reg) to the wide part, but I have created the wide_logit_block argument you suggested to make it more explicit.
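For reference, the first-order (wide) term being discussed is just a linear combination producing one scalar logit per example. In NumPy terms (a sketch with made-up weights of what a Dense(1, activation="linear", use_bias=True) layer computes):

```python
import numpy as np

def wide_logit(x, w, b):
    # First-order FM term: w.x + b -- a single scalar logit per example,
    # so no extra MLP projection is needed afterwards.
    return x @ w + b

x = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # densified one-hot inputs
w = np.array([0.2, -0.1, 0.5])    # one weight per input slot
b = 0.3
logits = wide_logit(x, w, b)      # shape (2,), already 1-D
```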

)

fm_input_block = fm_input_block or InputBlockV2(
cat_schema, dim=factors_dim, aggregation=None, **kwargs
Contributor

We already expose a fm_input_block so I feel like forwarding **kwargs could be removed. Also, aggregation=None is the default so we could remove that here.

cat_schema, dim=factors_dim, aggregation=None, **kwargs
)
pairwise_interaction = SequentialBlock(
Filter(cat_schema),
Contributor

The InputBlock already does filtering so this could be removed.

Member Author

I think we should keep that Filter, as the user might want to provide their own fm_input_block, whose schema might not be filtered to output only categorical features.
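For context, the FM second-order term computed on these categorical embeddings sums the pairwise dot products of the factor vectors, and Rendle's identity lets this be computed in O(n·k) instead of O(n²·k). A NumPy sketch of the math (illustrating the formula, not the FMPairwiseInteraction implementation):

```python
import numpy as np

def fm_pairwise_interaction(v):
    # v: (num_features, k) factor/embedding vectors of the active features.
    # Rendle (2010): sum_{i<j} <v_i, v_j>
    #   = 0.5 * ((sum_i v_i)^2 - sum_i v_i^2), summed over the k dims.
    sum_sq = np.square(v.sum(axis=0))      # (sum_i v_i)^2, shape (k,)
    sq_sum = np.square(v).sum(axis=0)      # sum_i v_i^2,   shape (k,)
    return 0.5 * (sum_sq - sq_sum).sum()   # scalar, 1-D before batch dims

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))                # 4 features, factors_dim k=8

# Brute-force reference over all distinct pairs i < j
brute = sum(v[i] @ v[j] for i in range(4) for j in range(i + 1, 4))
```

The scalar output here is also why the FM tower can be summed directly with the 1-D wide and deep logits, as the fixed implementation does.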

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit 48b7e4ade146052710796e64c8d7f6425fa53e38, no merge conflicts.
Running as SYSTEM
Setting status of 48b7e4ade146052710796e64c8d7f6425fa53e38 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1216/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse 48b7e4ade146052710796e64c8d7f6425fa53e38^{commit} # timeout=10
Checking out Revision 48b7e4ade146052710796e64c8d7f6425fa53e38 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 48b7e4ade146052710796e64c8d7f6425fa53e38 # timeout=10
Commit message: "Merge branch 'main' into ranking_models_inputs"
 > git rev-list --no-walk ba5c54b424b4b1fa849e8d876affb31106acf49d # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins15352671815863805251.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.4.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.6)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.3.0)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.9.1)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.4)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (23.2.1)
Requirement already satisfied: tornado>=6.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 691 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 27%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 31%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/core/test_transformations.py s............................ [ 35%]
.................. [ 38%]
tests/unit/tf/data_augmentation/test_misc.py . [ 38%]
tests/unit/tf/data_augmentation/test_negative_sampling.py .......... [ 40%]
tests/unit/tf/data_augmentation/test_noise.py ..... [ 40%]
tests/unit/tf/examples/test_01_getting_started.py . [ 40%]
tests/unit/tf/examples/test_02_dataschema.py . [ 41%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 41%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 41%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 41%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 41%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 41%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 41%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 42%]
tests/unit/tf/inputs/test_continuous.py ..... [ 42%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 47%]
..... [ 48%]
tests/unit/tf/inputs/test_tabular.py .................. [ 50%]
tests/unit/tf/layers/test_queue.py .............. [ 52%]
tests/unit/tf/losses/test_losses.py ....................... [ 56%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 57%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 60%]
tests/unit/tf/models/test_base.py s................ [ 62%]
tests/unit/tf/models/test_benchmark.py .. [ 63%]
tests/unit/tf/models/test_ranking.py .................................. [ 68%]
tests/unit/tf/models/test_retrieval.py ................................ [ 72%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 72%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 75%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 75%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 76%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 76%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 77%]
tests/unit/tf/predictions/test_base.py ..... [ 78%]
tests/unit/tf/predictions/test_classification.py ....... [ 79%]
tests/unit/tf/predictions/test_dot_product.py ........ [ 80%]
tests/unit/tf/predictions/test_regression.py .. [ 80%]
tests/unit/tf/predictions/test_sampling.py .... [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 84%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.11) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 13 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/data_augmentation/test_noise.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/core/test_transformations.py: 10 warnings
tests/unit/tf/data_augmentation/test_negative_sampling.py: 10 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/predictions/test_base.py: 5 warnings
tests/unit/tf/predictions/test_classification.py: 7 warnings
tests/unit/tf/predictions/test_dot_product.py: 8 warnings
tests/unit/tf/predictions/test_regression.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py::test_synthetic_aliccp_raw_data
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-True-8]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-10]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-9]
tests/unit/tf/test_dataset.py::test_tf_drp_reset[100-False-8]
tests/unit/tf/test_dataset.py::test_tf_catname_ordering
tests/unit/tf/test_dataset.py::test_tf_map
/usr/local/lib/python3.8/dist-packages/cudf/core/frame.py:384: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/data_augmentation/test_negative_sampling.py: 9 warnings
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/predictions/test_classification.py: 12 warnings
tests/unit/tf/predictions/test_dot_product.py: 2 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_filet6n923z.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/data_augmentation/test_noise.py::test_stochastic_swap_noise[0.7]
tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_4/sequential_block_3/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/core/transformations.py:1077: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 680 passed, 11 skipped, 1043 warnings in 990.52s (0:16:30) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins15102925909240585320.sh

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit c55b32df2b76dba4eb6771a2554745fecfd9abcb, no merge conflicts.
Running as SYSTEM
Setting status of c55b32df2b76dba4eb6771a2554745fecfd9abcb to PENDING with url https://10.20.13.93:8080/job/merlin_models/1301/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse c55b32df2b76dba4eb6771a2554745fecfd9abcb^{commit} # timeout=10
 > git rev-parse origin/c55b32df2b76dba4eb6771a2554745fecfd9abcb^{commit} # timeout=10
 > git rev-parse c55b32df2b76dba4eb6771a2554745fecfd9abcb^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script  : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log" 
[merlin_models] $ /bin/bash /tmp/jenkins2903456058832889539.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit d8782f08784e32dc2d1437eba136aac9374b2218, no merge conflicts.
Running as SYSTEM
Setting status of d8782f08784e32dc2d1437eba136aac9374b2218 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1305/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse d8782f08784e32dc2d1437eba136aac9374b2218^{commit} # timeout=10
Checking out Revision d8782f08784e32dc2d1437eba136aac9374b2218 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d8782f08784e32dc2d1437eba136aac9374b2218 # timeout=10
Commit message: "Adjusting to use the updated InputBlockV2"
 > git rev-list --no-walk b350c60f8e1c8ee61580862394e3a52ecfe720e5 # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins3214591760436895500.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 695 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 30%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/examples/test_01_getting_started.py . [ 31%]
tests/unit/tf/examples/test_02_dataschema.py . [ 31%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 31%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 32%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 32%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 32%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 32%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 32%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 32%]
tests/unit/tf/inputs/test_continuous.py ..... [ 33%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 38%]
..... [ 38%]
tests/unit/tf/inputs/test_tabular.py .................. [ 41%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 46%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 47%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 50%]
tests/unit/tf/models/test_base.py s................. [ 53%]
tests/unit/tf/models/test_benchmark.py .. [ 53%]
tests/unit/tf/models/test_ranking.py ..............FFFF................ [ 58%]
tests/unit/tf/models/test_retrieval.py ................................ [ 63%]
tests/unit/tf/outputs/test_base.py ..... [ 64%]
tests/unit/tf/outputs/test_classification.py ...... [ 64%]
tests/unit/tf/outputs/test_contrastive.py ......... [ 66%]
tests/unit/tf/outputs/test_regression.py .. [ 66%]
tests/unit/tf/outputs/test_sampling.py .... [ 67%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 67%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 69%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 70%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 71%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 71%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
................. [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py .......... [ 80%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_tensor.py .. [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 85%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_deepfm_model_only_categ_feats[True] ___________________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1db6ee540>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1db6ee540>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02570032, -0.03179889, -0.03721094, ..., 0.01357832,
E -0.02899024, 0.01453921],
E [-0.04399979, -0.00753646, -0.00720472, ..., 0.00024997,
E 0.01565896, -0.02869811],
E [ 0.02614701, -0.03707584, -0.02421354, ..., 0.02661935,
E 0.04587359, -0.00313843],
E ...,
E [-0.01801817, 0.04142593, 0.02776836, ..., -0.01339476,
E -0.04107499, -0.0064096 ],
E [-0.04987758, -0.03873288, -0.030273 , ..., -0.01512405,
E -0.01584437, -0.02466037],
E [-0.0137626 , 0.04578525, -0.02953109, ..., -0.01456221,
E -0.00472139, 0.0140526 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.00171213, -0.01755921, 0.02857446, ..., 0.02301586,
E -0.01788441, 0.03623377],
E [ 0.00862727, -0.03390821, 0.00515376, ..., -0.01194553,
E 0.03372042, -0.0090534 ],
E [ 0.00476586, 0.0073352 , -0.02046644, ..., -0.00370144,
E 0.00360762, -0.00312662],
E ...,
E [-0.00357788, 0.03606515, 0.01531923, ..., -0.01783171,
E -0.04155123, 0.01742039],
E [ 0.0191812 , 0.04684928, -0.00022608, ..., 0.02704518,
E 0.01896758, -0.03436905],
E [-0.01931492, 0.0356341 , -0.0218243 , ..., -0.00832837,
E -0.03270398, -0.00795592]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.03487933, 0.01901175, -0.0284261 , ..., -0.01840165,
E 0.01250464, -0.00717173],
E [ 0.01682284, 0.03735187, 0.01156843, ..., -0.00881452,
E -0.02928144, 0.00875034],
E [-0.00974071, -0.0131126 , 0.03639458, ..., 0.00978114,
E -0.01984922, -0.04836261],
E ...,
E [-0.02130981, -0.0425184 , 0.03600109, ..., 0.0156843 ,
E -0.01499469, 0.018743 ],
E [ 0.01336529, 0.0117893 , 0.02674155, ..., -0.03907121,
E -0.04320374, 0.00016158],
E [-0.02595156, -0.02762016, -0.00011653, ..., 0.01858344,
E -0.03044537, -0.03146877]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fd1c11d2a00>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in call
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02570032, -0.03179889, -0.03721094, ..., 0.01357832,
E -0.02899024, 0.01453921],
E [-0.04399979, -0.00753646, -0.00720472, ..., 0.00024997,
E 0.01565896, -0.02869811],
E [ 0.02614701, -0.03707584, -0.02421354, ..., 0.02661935,
E 0.04587359, -0.00313843],
E ...,
E [-0.01801817, 0.04142593, 0.02776836, ..., -0.01339476,
E -0.04107499, -0.0064096 ],
E [-0.04987758, -0.03873288, -0.030273 , ..., -0.01512405,
E -0.01584437, -0.02466037],
E [-0.0137626 , 0.04578525, -0.02953109, ..., -0.01456221,
E -0.00472139, 0.0140526 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.00171213, -0.01755921, 0.02857446, ..., 0.02301586,
E -0.01788441, 0.03623377],
E [ 0.00862727, -0.03390821, 0.00515376, ..., -0.01194553,
E 0.03372042, -0.0090534 ],
E [ 0.00476586, 0.0073352 , -0.02046644, ..., -0.00370144,
E 0.00360762, -0.00312662],
E ...,
E [-0.00357788, 0.03606515, 0.01531923, ..., -0.01783171,
E -0.04155123, 0.01742039],
E [ 0.0191812 , 0.04684928, -0.00022608, ..., 0.02704518,
E 0.01896758, -0.03436905],
E [-0.01931492, 0.0356341 , -0.0218243 , ..., -0.00832837,
E -0.03270398, -0.00795592]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.03487933, 0.01901175, -0.0284261 , ..., -0.01840165,
E 0.01250464, -0.00717173],
E [ 0.01682284, 0.03735187, 0.01156843, ..., -0.00881452,
E -0.02928144, 0.00875034],
E [-0.00974071, -0.0131126 , 0.03639458, ..., 0.00978114,
E -0.01984922, -0.04836261],
E ...,
E [-0.02130981, -0.0425184 , 0.03600109, ..., 0.0156843 ,
E -0.01499469, 0.018743 ],
E [ 0.01336529, 0.0117893 , 0.02674155, ..., -0.03907121,
E -0.04320374, 0.00016158],
E [-0.02595156, -0.02762016, -0.00011653, ..., 0.01858344,
E -0.03044537, -0.03146877]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
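The root cause surfacing in these frames is the membership test inside `Filter.check_feature` (`feature_name in self.feature_names`), which raises when `feature_names` holds a bare `Tags` enum member instead of a list of names. A minimal sketch of that failure mode; the `Tags` enum and `check_feature` function below are simplified stand-ins, not merlin's actual implementations:

```python
import enum


class Tags(enum.Enum):
    """Hypothetical stand-in for merlin.schema.Tags."""

    ITEM_ID = "item_id"


def check_feature(feature_names, feature_name, exclude=False):
    """Same membership test as Filter.check_feature in tabular.py."""
    if exclude:
        return feature_name not in feature_names
    return feature_name in feature_names


# Works when feature_names is a list of strings:
check_feature(["item_id", "user_id"], "item_id")  # → True

# Fails when a bare Tags member is passed instead of a list,
# because a single enum member supports neither `in` nor iteration:
try:
    check_feature(Tags.ITEM_ID, "item_id")
except TypeError as exc:
    print(exc)  # argument of type 'Tags' is not iterable
```

The fix on the caller's side is to normalize a `Tags` value into the list of matching column names before constructing the `Filter`.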
__________________ test_deepfm_model_only_categ_feats[False] ___________________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1b86e6ae0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1b86e6ae0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03019184, -0.02111769, 0.01050959, ..., -0.0172928 ,
E 0.00180059, -0.045084 ],
E [-0.02480277, -0.01521372, 0.02216304, ..., -0.04987577,
E -0.00641552, -0.02169349],
E [-0.02918756, -0.02672333, -0.02868438, ..., -0.02571416,
E 0.03517436, -0.00967953],
E ...,
E [ 0.03259308, -0.02479436, -0.00282454, ..., -0.02942575,
E -0.01611086, -0.04680791],
E [-0.01399783, -0.01902272, 0.0045493 , ..., -0.04091265,
E 0.02443952, -0.0302704 ],
E [ 0.00040155, 0.01888806, 0.01079621, ..., -0.03706405,
E -0.0367962 , 0.01322072]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03009479, 0.03900156, -0.02817373, ..., 0.00038097,
E 0.03044956, -0.04745512],
E [-0.03989201, -0.0167386 , 0.00648766, ..., -0.02095594,
E -0.00021695, 0.03996665],
E [ 0.03626612, 0.04377914, 0.03230296, ..., -0.03567819,
E -0.03715755, 0.0365119 ],
E ...,
E [ 0.04226034, 0.03046766, -0.04382597, ..., -0.02501919,
E -0.01451756, 0.02671708],
E [-0.04766364, -0.03410469, 0.0293314 , ..., 0.03606859,
E -0.0374441 , -0.03671223],
E [-0.02736839, -0.02183306, -0.04594931, ..., -0.01534285,
E 0.00578789, -0.00043597]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02361941, -0.03001617, 0.01258189, ..., 0.03452733,
E -0.00601999, -0.04937288],
E [-0.02313321, 0.01118822, 0.02567822, ..., 0.00511445,
E -0.026209 , 0.00258926],
E [ 0.02692063, -0.02748579, -0.02021652, ..., 0.03992691,
E -0.00202364, -0.03304167],
E ...,
E [-0.03796924, 0.03143172, 0.02334212, ..., -0.02866315,
E 0.0450588 , -0.01824418],
E [-0.02232841, 0.00791576, -0.02536985, ..., 0.0101485 ,
E -0.00764833, 0.04040624],
E [-0.02146883, -0.00375839, -0.02736113, ..., -0.03641828,
E 0.04995141, 0.02966121]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fd1c22308b0>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in __call__
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03019184, -0.02111769, 0.01050959, ..., -0.0172928 ,
E 0.00180059, -0.045084 ],
E [-0.02480277, -0.01521372, 0.02216304, ..., -0.04987577,
E -0.00641552, -0.02169349],
E [-0.02918756, -0.02672333, -0.02868438, ..., -0.02571416,
E 0.03517436, -0.00967953],
E ...,
E [ 0.03259308, -0.02479436, -0.00282454, ..., -0.02942575,
E -0.01611086, -0.04680791],
E [-0.01399783, -0.01902272, 0.0045493 , ..., -0.04091265,
E 0.02443952, -0.0302704 ],
E [ 0.00040155, 0.01888806, 0.01079621, ..., -0.03706405,
E -0.0367962 , 0.01322072]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03009479, 0.03900156, -0.02817373, ..., 0.00038097,
E 0.03044956, -0.04745512],
E [-0.03989201, -0.0167386 , 0.00648766, ..., -0.02095594,
E -0.00021695, 0.03996665],
E [ 0.03626612, 0.04377914, 0.03230296, ..., -0.03567819,
E -0.03715755, 0.0365119 ],
E ...,
E [ 0.04226034, 0.03046766, -0.04382597, ..., -0.02501919,
E -0.01451756, 0.02671708],
E [-0.04766364, -0.03410469, 0.0293314 , ..., 0.03606859,
E -0.0374441 , -0.03671223],
E [-0.02736839, -0.02183306, -0.04594931, ..., -0.01534285,
E 0.00578789, -0.00043597]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02361941, -0.03001617, 0.01258189, ..., 0.03452733,
E -0.00601999, -0.04937288],
E [-0.02313321, 0.01118822, 0.02567822, ..., 0.00511445,
E -0.026209 , 0.00258926],
E [ 0.02692063, -0.02748579, -0.02021652, ..., 0.03992691,
E -0.00202364, -0.03304167],
E ...,
E [-0.03796924, 0.03143172, 0.02334212, ..., -0.02866315,
E 0.0450588 , -0.01824418],
E [-0.02232841, 0.00791576, -0.02536985, ..., 0.0101485 ,
E -0.00764833, 0.04040624],
E [-0.02146883, -0.00375839, -0.02736113, ..., -0.03641828,
E 0.04995141, 0.02966121]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
______________ test_deepfm_model_categ_and_continuous_feats[True] ______________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1c32c08b0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1c32c08b0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02054526, -0.02200582, 0.01605621, ..., 0.02267637,
E 0.01410104, -0.00604937],
E [-0.01742599, 0.04021536, -0.02411727, ..., 0.01386498,
E 0.03677467, 0.01996145],
E [ 0.00269372, -0.03342003, -0.04908162, ..., 0.04889261,
E 0.01890564, 0.03716913],
E ...,
E [-0.03502411, -0.03122339, 0.03508973, ..., -0.03769619,
E -0.03154981, -0.01263622],
E [-0.04772219, 0.01338226, 0.00763812, ..., -0.00482909,
E -0.0085258 , 0.00261904],
E [-0.01138081, -0.04893042, 0.00227406, ..., 0.00839734,
E -0.04513972, 0.00374116]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.01045848, 0.04803827, -0.01155933, ..., -0.04183086,
E 0.04058427, -0.01171237],
E [ 0.04373075, 0.00500919, 0.04929531, ..., -0.03939838,
E 0.00203785, -0.00989903],
E [ 0.04189702, -0.03120966, -0.00791519, ..., 0.00858233,
E 0.01543109, 0.01549872],
E ...,
E [ 0.00184605, 0.0434628 , -0.02273744, ..., 0.04810428,
E -0.02350572, -0.00790094],
E [ 0.04012373, -0.03322871, 0.0497464 , ..., -0.01007406,
E -0.00045862, 0.00152151],
E [-0.02862725, -0.01967434, -0.00228672, ..., 0.00557036,
E -0.03056601, -0.03539287]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00854038, -0.00232061, 0.00140734, ..., 0.04226798,
E -0.03381544, -0.02190081],
E [ 0.03141161, 0.01365862, -0.01205138, ..., 0.0265334 ,
E -0.03197428, 0.03824944],
E [-0.03385307, -0.0046339 , 0.01335522, ..., 0.04823765,
E 0.04733351, 0.02277596],
E ...,
E [-0.04601424, 0.01994986, 0.04879292, ..., -0.00574926,
E 0.04242582, -0.00810363],
E [ 0.01967518, -0.03233056, -0.00338495, ..., 0.04127854,
E -0.02612364, 0.03764292],
E [-0.0034511 , 0.04024262, 0.02186977, ..., 0.04592418,
E -0.00961372, 0.02344227]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fd1ba555c40>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().__call__(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in __call__
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02054526, -0.02200582, 0.01605621, ..., 0.02267637,
E 0.01410104, -0.00604937],
E [-0.01742599, 0.04021536, -0.02411727, ..., 0.01386498,
E 0.03677467, 0.01996145],
E [ 0.00269372, -0.03342003, -0.04908162, ..., 0.04889261,
E 0.01890564, 0.03716913],
E ...,
E [-0.03502411, -0.03122339, 0.03508973, ..., -0.03769619,
E -0.03154981, -0.01263622],
E [-0.04772219, 0.01338226, 0.00763812, ..., -0.00482909,
E -0.0085258 , 0.00261904],
E [-0.01138081, -0.04893042, 0.00227406, ..., 0.00839734,
E -0.04513972, 0.00374116]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.01045848, 0.04803827, -0.01155933, ..., -0.04183086,
E 0.04058427, -0.01171237],
E [ 0.04373075, 0.00500919, 0.04929531, ..., -0.03939838,
E 0.00203785, -0.00989903],
E [ 0.04189702, -0.03120966, -0.00791519, ..., 0.00858233,
E 0.01543109, 0.01549872],
E ...,
E [ 0.00184605, 0.0434628 , -0.02273744, ..., 0.04810428,
E -0.02350572, -0.00790094],
E [ 0.04012373, -0.03322871, 0.0497464 , ..., -0.01007406,
E -0.00045862, 0.00152151],
E [-0.02862725, -0.01967434, -0.00228672, ..., 0.00557036,
E -0.03056601, -0.03539287]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00854038, -0.00232061, 0.00140734, ..., 0.04226798,
E -0.03381544, -0.02190081],
E [ 0.03141161, 0.01365862, -0.01205138, ..., 0.0265334 ,
E -0.03197428, 0.03824944],
E [-0.03385307, -0.0046339 , 0.01335522, ..., 0.04823765,
E 0.04733351, 0.02277596],
E ...,
E [-0.04601424, 0.01994986, 0.04879292, ..., -0.00574926,
E 0.04242582, -0.00810363],
E [ 0.01967518, -0.03233056, -0.00338495, ..., 0.04127854,
E -0.02612364, 0.03764292],
E [-0.0034511 , 0.04024262, 0.02186977, ..., 0.04592418,
E -0.00961372, 0.02344227]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
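The root cause surfaced above, `TypeError: argument of type 'Tags' is not iterable`, comes from `Filter.check_feature` running a string membership test (`feature_name in self.feature_names`) while `feature_names` holds a single `Tags` enum member instead of a list of column names. A minimal standalone sketch reproduces the same error; the `Tags` class below is a stand-in for `merlin.schema.Tags`, not the real one:

```python
from enum import Enum


class Tags(Enum):  # stand-in for merlin.schema.Tags
    CATEGORICAL = "categorical"


# A bare enum member, where Filter expects a list of feature names:
feature_names = Tags.CATEGORICAL


def check_feature(feature_name):
    # Mirrors Filter.check_feature: a membership test against feature_names.
    # An Enum *member* defines no __contains__/__iter__, so `in` raises.
    return feature_name in feature_names


try:
    check_feature("item_id")
except TypeError as err:
    print(err)  # argument of type 'Tags' is not iterable
```

Passing `feature_names=[Tags.CATEGORICAL]` (or resolving the tag to column names before constructing the `Filter`) avoids the error, since membership is then tested against an iterable.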
_____________ test_deepfm_model_categ_and_continuous_feats[False] ______________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1d07e0bd0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fd1d07e0bd0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00095159, -0.00780176, -0.01070255, ..., -0.0245008 ,
E -0.00115737, -0.02886876],
E [-0.012412 , 0.0399057 , 0.04028242, ..., -0.04560817,
E 0.00926073, -0.00812765],
E [-0.0196968 , 0.025419 , 0.00797405, ..., 0.04023785,
E -0.0137983 , -0.03792955],
E ...,
E [ 0.00676475, 0.00952966, -0.01445646, ..., -0.00510135,
E -0.04126855, -0.04733052],
E [-0.02780222, -0.03965948, 0.04582984, ..., -0.02401545,
E -0.04722666, 0.02736635],
E [-0.02108997, 0.00042298, 0.04603405, ..., -0.03562424,
E 0.01985692, 0.0199568 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03682753, -0.00775306, 0.00989783, ..., 0.04243133,
E 0.02013233, 0.0142022 ],
E [ 0.03489197, -0.00297128, -0.02943789, ..., 0.02520556,
E 0.04500607, -0.03690022],
E [-0.00748165, -0.01145886, 0.02205738, ..., 0.026886 ,
E -0.00793109, 0.03001534],
E ...,
E [ 0.03647201, -0.02389548, 0.03249105, ..., -0.04324914,
E 0.02238042, -0.01105363],
E [-0.0121203 , -0.02577748, -0.03620006, ..., 0.01091584,
E -0.0377016 , 0.04898537],
E [-0.03664935, 0.01268804, 0.03041216, ..., -0.04956597,
E 0.00527128, -0.00412567]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03058578, -0.00037177, -0.04440222, ..., -0.0143258 ,
E 0.0078275 , -0.02509269],
E [ 0.00060018, 0.02608771, 0.04238523, ..., 0.01086897,
E 0.02085884, 0.01716653],
E [-0.02398338, -0.0076279 , -0.0233204 , ..., -0.02491845,
E -0.01284405, 0.04893791],
E ...,
E [ 0.00118636, -0.02557309, 0.04411072, ..., 0.01656295,
E -0.00796812, -0.03289662],
E [ 0.02188356, -0.01895338, -0.01765849, ..., -0.038915 ,
E -0.02888047, 0.01209231],
E [ 0.01988271, 0.04559529, -0.0447374 , ..., 0.0016938 ,
E -0.01167984, -0.00597036]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fd1c163a970>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in call
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ITEM: 'item'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00095159, -0.00780176, -0.01070255, ..., -0.0245008 ,
E -0.00115737, -0.02886876],
E [-0.012412 , 0.0399057 , 0.04028242, ..., -0.04560817,
E 0.00926073, -0.00812765],
E [-0.0196968 , 0.025419 , 0.00797405, ..., 0.04023785,
E -0.0137983 , -0.03792955],
E ...,
E [ 0.00676475, 0.00952966, -0.01445646, ..., -0.00510135,
E -0.04126855, -0.04733052],
E [-0.02780222, -0.03965948, 0.04582984, ..., -0.02401545,
E -0.04722666, 0.02736635],
E [-0.02108997, 0.00042298, 0.04603405, ..., -0.03562424,
E 0.01985692, 0.0199568 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03682753, -0.00775306, 0.00989783, ..., 0.04243133,
E 0.02013233, 0.0142022 ],
E [ 0.03489197, -0.00297128, -0.02943789, ..., 0.02520556,
E 0.04500607, -0.03690022],
E [-0.00748165, -0.01145886, 0.02205738, ..., 0.026886 ,
E -0.00793109, 0.03001534],
E ...,
E [ 0.03647201, -0.02389548, 0.03249105, ..., -0.04324914,
E 0.02238042, -0.01105363],
E [-0.0121203 , -0.02577748, -0.03620006, ..., 0.01091584,
E -0.0377016 , 0.04898537],
E [-0.03664935, 0.01268804, 0.03041216, ..., -0.04956597,
E 0.00527128, -0.00412567]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.USER: 'user'>, <Tags.CATEGORICAL: 'categorical'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03058578, -0.00037177, -0.04440222, ..., -0.0143258 ,
E 0.0078275 , -0.02509269],
E [ 0.00060018, 0.02608771, 0.04238523, ..., 0.01086897,
E 0.02085884, 0.01716653],
E [-0.02398338, -0.0076279 , -0.0233204 , ..., -0.02491845,
E -0.01284405, 0.04893791],
E ...,
E [ 0.00118636, -0.02557309, 0.04411072, ..., 0.01656295,
E -0.00796812, -0.03289662],
E [ 0.02188356, -0.01895338, -0.01765849, ..., -0.038915 ,
E -0.02888047, 0.01209231],
E [ 0.01988271, 0.04559529, -0.0447374 , ..., 0.0016938 ,
E -0.01167984, -0.00597036]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/outputs/test_contrastive.py: 2 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_filewsv8v6ax.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:566: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
===== 4 failed, 680 passed, 11 skipped, 1045 warnings in 975.70s (0:16:15) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.github.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins9793061876429141845.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit db03880e65b5ddd7714c516cb06d6e797fae062c, no merge conflicts.
Running as SYSTEM
Setting status of db03880e65b5ddd7714c516cb06d6e797fae062c to PENDING with url https://10.20.13.93:8080/job/merlin_models/1307/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse db03880e65b5ddd7714c516cb06d6e797fae062c^{commit} # timeout=10
Checking out Revision db03880e65b5ddd7714c516cb06d6e797fae062c (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f db03880e65b5ddd7714c516cb06d6e797fae062c # timeout=10
Commit message: "Fixed formating issue"
 > git rev-list --no-walk d6d33101186fd7c07ba304b1d944d49dcc8dbffc # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins7522390991320777298.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 695 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 30%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/examples/test_01_getting_started.py . [ 31%]
tests/unit/tf/examples/test_02_dataschema.py . [ 31%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 31%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 32%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 32%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 32%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 32%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 32%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 32%]
tests/unit/tf/inputs/test_continuous.py ..... [ 33%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 38%]
..... [ 38%]
tests/unit/tf/inputs/test_tabular.py .................. [ 41%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 46%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 47%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 50%]
tests/unit/tf/models/test_base.py s................. [ 53%]
tests/unit/tf/models/test_benchmark.py .. [ 53%]
tests/unit/tf/models/test_ranking.py ..............FFFF................ [ 58%]
tests/unit/tf/models/test_retrieval.py ................................ [ 63%]
tests/unit/tf/outputs/test_base.py ..... [ 64%]
tests/unit/tf/outputs/test_classification.py ...... [ 64%]
tests/unit/tf/outputs/test_contrastive.py ......... [ 66%]
tests/unit/tf/outputs/test_regression.py .. [ 66%]
tests/unit/tf/outputs/test_sampling.py .... [ 67%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 67%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 69%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 70%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 71%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 71%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
................. [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py .......... [ 80%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_tensor.py .. [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 85%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=================================== FAILURES ===================================
___________________ test_deepfm_model_only_categ_feats[True] ___________________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf1748860>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError
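The root cause above can be reproduced in isolation: `Filter.feature_names` ends up holding a bare `Tags` enum member instead of a list of feature names, and Python's `in` operator raises `TypeError` when its right-hand side is a non-container enum member. A minimal sketch, using a stand-in `Tags` enum and a simplified `check_feature` (both illustrative, not the merlin implementations):

```python
from enum import Enum


# Stand-in for merlin.schema.Tags (illustrative only)
class Tags(Enum):
    CATEGORICAL = "categorical"


def check_feature(feature_names, feature_name) -> bool:
    # Mirrors the membership test in Filter.check_feature:
    # fine when feature_names is a list of names...
    return feature_name in feature_names


assert check_feature(["item_id", "item_category", "user_id"], "item_id")

# ...but a bare Tags member is not a container, so the same
# membership test raises, matching the traceback above:
try:
    check_feature(Tags.CATEGORICAL, "item_id")
except TypeError as exc:
    print(exc)  # argument of type 'Tags' is not iterable
```

Resolving the tag selection into concrete feature names (e.g. via the schema) before constructing the `Filter` avoids the membership test against the enum member.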

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf1748860>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02570032, -0.03179889, -0.03721094, ..., 0.01357832,
E -0.02899024, 0.01453921],
E [-0.04399979, -0.00753646, -0.00720472, ..., 0.00024997,
E 0.01565896, -0.02869811],
E [ 0.02614701, -0.03707584, -0.02421354, ..., 0.02661935,
E 0.04587359, -0.00313843],
E ...,
E [-0.01801817, 0.04142593, 0.02776836, ..., -0.01339476,
E -0.04107499, -0.0064096 ],
E [-0.04987758, -0.03873288, -0.030273 , ..., -0.01512405,
E -0.01584437, -0.02466037],
E [-0.0137626 , 0.04578525, -0.02953109, ..., -0.01456221,
E -0.00472139, 0.0140526 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.00171213, -0.01755921, 0.02857446, ..., 0.02301586,
E -0.01788441, 0.03623377],
E [ 0.00862727, -0.03390821, 0.00515376, ..., -0.01194553,
E 0.03372042, -0.0090534 ],
E [ 0.00476586, 0.0073352 , -0.02046644, ..., -0.00370144,
E 0.00360762, -0.00312662],
E ...,
E [-0.00357788, 0.03606515, 0.01531923, ..., -0.01783171,
E -0.04155123, 0.01742039],
E [ 0.0191812 , 0.04684928, -0.00022608, ..., 0.02704518,
E 0.01896758, -0.03436905],
E [-0.01931492, 0.0356341 , -0.0218243 , ..., -0.00832837,
E -0.03270398, -0.00795592]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.03487933, 0.01901175, -0.0284261 , ..., -0.01840165,
E 0.01250464, -0.00717173],
E [ 0.01682284, 0.03735187, 0.01156843, ..., -0.00881452,
E -0.02928144, 0.00875034],
E [-0.00974071, -0.0131126 , 0.03639458, ..., 0.00978114,
E -0.01984922, -0.04836261],
E ...,
E [-0.02130981, -0.0425184 , 0.03600109, ..., 0.0156843 ,
E -0.01499469, 0.018743 ],
E [ 0.01336529, 0.0117893 , 0.02674155, ..., -0.03907121,
E -0.04320374, 0.00016158],
E [-0.02595156, -0.02762016, -0.00011653, ..., 0.01858344,
E -0.03044537, -0.03146877]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fdcf1f669a0>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in call
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02570032, -0.03179889, -0.03721094, ..., 0.01357832,
E -0.02899024, 0.01453921],
E [-0.04399979, -0.00753646, -0.00720472, ..., 0.00024997,
E 0.01565896, -0.02869811],
E [ 0.02614701, -0.03707584, -0.02421354, ..., 0.02661935,
E 0.04587359, -0.00313843],
E ...,
E [-0.01801817, 0.04142593, 0.02776836, ..., -0.01339476,
E -0.04107499, -0.0064096 ],
E [-0.04987758, -0.03873288, -0.030273 , ..., -0.01512405,
E -0.01584437, -0.02466037],
E [-0.0137626 , 0.04578525, -0.02953109, ..., -0.01456221,
E -0.00472139, 0.0140526 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.00171213, -0.01755921, 0.02857446, ..., 0.02301586,
E -0.01788441, 0.03623377],
E [ 0.00862727, -0.03390821, 0.00515376, ..., -0.01194553,
E 0.03372042, -0.0090534 ],
E [ 0.00476586, 0.0073352 , -0.02046644, ..., -0.00370144,
E 0.00360762, -0.00312662],
E ...,
E [-0.00357788, 0.03606515, 0.01531923, ..., -0.01783171,
E -0.04155123, 0.01742039],
E [ 0.0191812 , 0.04684928, -0.00022608, ..., 0.02704518,
E 0.01896758, -0.03436905],
E [-0.01931492, 0.0356341 , -0.0218243 , ..., -0.00832837,
E -0.03270398, -0.00795592]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.03487933, 0.01901175, -0.0284261 , ..., -0.01840165,
E 0.01250464, -0.00717173],
E [ 0.01682284, 0.03735187, 0.01156843, ..., -0.00881452,
E -0.02928144, 0.00875034],
E [-0.00974071, -0.0131126 , 0.03639458, ..., 0.00978114,
E -0.01984922, -0.04836261],
E ...,
E [-0.02130981, -0.0425184 , 0.03600109, ..., 0.0156843 ,
E -0.01499469, 0.018743 ],
E [ 0.01336529, 0.0117893 , 0.02674155, ..., -0.03907121,
E -0.04320374, 0.00016158],
E [-0.02595156, -0.02762016, -0.00011653, ..., 0.01858344,
E -0.03044537, -0.03146877]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
__________________ test_deepfm_model_only_categ_feats[False] ___________________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
>           layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
>   output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
>   outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcdb739770>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
>           layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
>   build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
>           six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
>           raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)..._feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
>           layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
>   output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
>   outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcdb739770>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03019184, -0.02111769, 0.01050959, ..., -0.0172928 ,
E 0.00180059, -0.045084 ],
E [-0.02480277, -0.01521372, 0.02216304, ..., -0.04987577,
E -0.00641552, -0.02169349],
E [-0.02918756, -0.02672333, -0.02868438, ..., -0.02571416,
E 0.03517436, -0.00967953],
E ...,
E [ 0.03259308, -0.02479436, -0.00282454, ..., -0.02942575,
E -0.01611086, -0.04680791],
E [-0.01399783, -0.01902272, 0.0045493 , ..., -0.04091265,
E 0.02443952, -0.0302704 ],
E [ 0.00040155, 0.01888806, 0.01079621, ..., -0.03706405,
E -0.0367962 , 0.01322072]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03009479, 0.03900156, -0.02817373, ..., 0.00038097,
E 0.03044956, -0.04745512],
E [-0.03989201, -0.0167386 , 0.00648766, ..., -0.02095594,
E -0.00021695, 0.03996665],
E [ 0.03626612, 0.04377914, 0.03230296, ..., -0.03567819,
E -0.03715755, 0.0365119 ],
E ...,
E [ 0.04226034, 0.03046766, -0.04382597, ..., -0.02501919,
E -0.01451756, 0.02671708],
E [-0.04766364, -0.03410469, 0.0293314 , ..., 0.03606859,
E -0.0374441 , -0.03671223],
E [-0.02736839, -0.02183306, -0.04594931, ..., -0.01534285,
E 0.00578789, -0.00043597]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02361941, -0.03001617, 0.01258189, ..., 0.03452733,
E -0.00601999, -0.04937288],
E [-0.02313321, 0.01118822, 0.02567822, ..., 0.00511445,
E -0.026209 , 0.00258926],
E [ 0.02692063, -0.02748579, -0.02021652, ..., 0.03992691,
E -0.00202364, -0.03304167],
E ...,
E [-0.03796924, 0.03143172, 0.02334212, ..., -0.02866315,
E 0.0450588 , -0.01824418],
E [-0.02232841, 0.00791576, -0.02536985, ..., 0.0101485 ,
E -0.00764833, 0.04040624],
E [-0.02146883, -0.00375839, -0.02736113, ..., -0.03641828,
E 0.04995141, 0.02966121]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fdcd99d1c40>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_only_categ_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
>   testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:171:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in __call__
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in __call__
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(... 1])
)
(_feature_dtypes): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03019184, -0.02111769, 0.01050959, ..., -0.0172928 ,
E 0.00180059, -0.045084 ],
E [-0.02480277, -0.01521372, 0.02216304, ..., -0.04987577,
E -0.00641552, -0.02169349],
E [-0.02918756, -0.02672333, -0.02868438, ..., -0.02571416,
E 0.03517436, -0.00967953],
E ...,
E [ 0.03259308, -0.02479436, -0.00282454, ..., -0.02942575,
E -0.01611086, -0.04680791],
E [-0.01399783, -0.01902272, 0.0045493 , ..., -0.04091265,
E 0.02443952, -0.0302704 ],
E [ 0.00040155, 0.01888806, 0.01079621, ..., -0.03706405,
E -0.0367962 , 0.01322072]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03009479, 0.03900156, -0.02817373, ..., 0.00038097,
E 0.03044956, -0.04745512],
E [-0.03989201, -0.0167386 , 0.00648766, ..., -0.02095594,
E -0.00021695, 0.03996665],
E [ 0.03626612, 0.04377914, 0.03230296, ..., -0.03567819,
E -0.03715755, 0.0365119 ],
E ...,
E [ 0.04226034, 0.03046766, -0.04382597, ..., -0.02501919,
E -0.01451756, 0.02671708],
E [-0.04766364, -0.03410469, 0.0293314 , ..., 0.03606859,
E -0.0374441 , -0.03671223],
E [-0.02736839, -0.02183306, -0.04594931, ..., -0.01534285,
E 0.00578789, -0.00043597]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02361941, -0.03001617, 0.01258189, ..., 0.03452733,
E -0.00601999, -0.04937288],
E [-0.02313321, 0.01118822, 0.02567822, ..., 0.00511445,
E -0.026209 , 0.00258926],
E [ 0.02692063, -0.02748579, -0.02021652, ..., 0.03992691,
E -0.00202364, -0.03304167],
E ...,
E [-0.03796924, 0.03143172, 0.02334212, ..., -0.02866315,
E 0.0450588 , -0.01824418],
E [-0.02232841, 0.00791576, -0.02536985, ..., 0.0101485 ,
E -0.00764833, 0.04040624],
E [-0.02146883, -0.00375839, -0.02736113, ..., -0.03641828,
E 0.04995141, 0.02966121]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
______________ test_deepfm_model_categ_and_continuous_feats[True] ______________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf23be3b0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf23be3b0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02054526, -0.02200582, 0.01605621, ..., 0.02267637,
E 0.01410104, -0.00604937],
E [-0.01742599, 0.04021536, -0.02411727, ..., 0.01386498,
E 0.03677467, 0.01996145],
E [ 0.00269372, -0.03342003, -0.04908162, ..., 0.04889261,
E 0.01890564, 0.03716913],
E ...,
E [-0.03502411, -0.03122339, 0.03508973, ..., -0.03769619,
E -0.03154981, -0.01263622],
E [-0.04772219, 0.01338226, 0.00763812, ..., -0.00482909,
E -0.0085258 , 0.00261904],
E [-0.01138081, -0.04893042, 0.00227406, ..., 0.00839734,
E -0.04513972, 0.00374116]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.01045848, 0.04803827, -0.01155933, ..., -0.04183086,
E 0.04058427, -0.01171237],
E [ 0.04373075, 0.00500919, 0.04929531, ..., -0.03939838,
E 0.00203785, -0.00989903],
E [ 0.04189702, -0.03120966, -0.00791519, ..., 0.00858233,
E 0.01543109, 0.01549872],
E ...,
E [ 0.00184605, 0.0434628 , -0.02273744, ..., 0.04810428,
E -0.02350572, -0.00790094],
E [ 0.04012373, -0.03322871, 0.0497464 , ..., -0.01007406,
E -0.00045862, 0.00152151],
E [-0.02862725, -0.01967434, -0.00228672, ..., 0.00557036,
E -0.03056601, -0.03539287]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00854038, -0.00232061, 0.00140734, ..., 0.04226798,
E -0.03381544, -0.02190081],
E [ 0.03141161, 0.01365862, -0.01205138, ..., 0.0265334 ,
E -0.03197428, 0.03824944],
E [-0.03385307, -0.0046339 , 0.01335522, ..., 0.04823765,
E 0.04733351, 0.02277596],
E ...,
E [-0.04601424, 0.01994986, 0.04879292, ..., -0.00574926,
E 0.04242582, -0.00810363],
E [ 0.01967518, -0.03233056, -0.00338495, ..., 0.04127854,
E -0.02612364, 0.03764292],
E [-0.0034511 , 0.04024262, 0.02186977, ..., 0.04592418,
E -0.00961372, 0.02344227]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fdcf197c670>
run_eagerly = True

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
  testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in init
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in init
super(KerasSequenceAdapter, self).init(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in init
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in call
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.02054526, -0.02200582, 0.01605621, ..., 0.02267637,
E 0.01410104, -0.00604937],
E [-0.01742599, 0.04021536, -0.02411727, ..., 0.01386498,
E 0.03677467, 0.01996145],
E [ 0.00269372, -0.03342003, -0.04908162, ..., 0.04889261,
E 0.01890564, 0.03716913],
E ...,
E [-0.03502411, -0.03122339, 0.03508973, ..., -0.03769619,
E -0.03154981, -0.01263622],
E [-0.04772219, 0.01338226, 0.00763812, ..., -0.00482909,
E -0.0085258 , 0.00261904],
E [-0.01138081, -0.04893042, 0.00227406, ..., 0.00839734,
E -0.04513972, 0.00374116]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[-0.01045848, 0.04803827, -0.01155933, ..., -0.04183086,
E 0.04058427, -0.01171237],
E [ 0.04373075, 0.00500919, 0.04929531, ..., -0.03939838,
E 0.00203785, -0.00989903],
E [ 0.04189702, -0.03120966, -0.00791519, ..., 0.00858233,
E 0.01543109, 0.01549872],
E ...,
E [ 0.00184605, 0.0434628 , -0.02273744, ..., 0.04810428,
E -0.02350572, -0.00790094],
E [ 0.04012373, -0.03322871, 0.0497464 , ..., -0.01007406,
E -0.00045862, 0.00152151],
E [-0.02862725, -0.01967434, -0.00228672, ..., 0.00557036,
E -0.03056601, -0.03539287]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00854038, -0.00232061, 0.00140734, ..., 0.04226798,
E -0.03381544, -0.02190081],
E [ 0.03141161, 0.01365862, -0.01205138, ..., 0.0265334 ,
E -0.03197428, 0.03824944],
E [-0.03385307, -0.0046339 , 0.01335522, ..., 0.04823765,
E 0.04733351, 0.02277596],
E ...,
E [-0.04601424, 0.01994986, 0.04879292, ..., -0.00574926,
E 0.04242582, -0.00810363],
E [ 0.01967518, -0.03233056, -0.00338495, ..., 0.04127854,
E -0.02612364, 0.03764292],
E [-0.0034511 , 0.04024262, 0.02186977, ..., 0.04592418,
E -0.00961372, 0.02344227]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
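The repeated `TypeError: argument of type 'Tags' is not iterable` at the bottom of these tracebacks comes from `Filter.check_feature` running `feature_name in self.feature_names` when `feature_names` holds a single `Tags` enum member rather than a list of feature names. A minimal sketch of that failure mode (the `Tags` enum here is a stand-in, not the merlin import):

```python
from enum import Enum


class Tags(Enum):
    """Stand-in for merlin's Tags enum, for illustration only."""
    CATEGORICAL = "categorical"


# Filter.check_feature effectively does: feature_name in self.feature_names
# If feature_names is a bare Tags member instead of a list like
# ["item_id", "item_category"], the membership test tries to iterate
# the enum member and raises TypeError.
feature_names = Tags.CATEGORICAL
try:
    "item_id" in feature_names
except TypeError as exc:
    print(exc)  # argument of type 'Tags' is not iterable

# Wrapping the selector's feature names in a list avoids the error:
feature_names = ["item_id", "item_category", "user_id"]
print("item_id" in feature_names)  # True
```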
_____________ test_deepfm_model_categ_and_continuous_feats[False] ______________

self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
  output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
  outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf18da1d0>

outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
  return feature_name in self.feature_names

E TypeError: argument of type 'Tags' is not iterable

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

self = SequentialBlock(
(layers): List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): '...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): Filter(
(feature_names): List(
(0): 'item_id'
(1): 'item_category'
(2): 'user_id'
... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
          layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 1]), 'item_id': TensorShape([50, 1]), 'user_id': TensorShape([50, 1])}

def build(self, input_shape=None):
    """Builds the sequential block

    Parameters
    ----------
    input_shape : tf.TensorShape, optional
        The input shape, by default None
    """
    self._maybe_propagate_context(input_shape)
  build_sequentially(self, self.layers, input_shape)

merlin/models/tf/core/combinators.py:129:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
            layer.build(input_shape)
        except TypeError:
            t, v, tb = sys.exc_info()
            if isinstance(input_shape, dict) and isinstance(last_layer, TabularBlock):
                v = TypeError(
                    f"Couldn't build {layer}, "
                    f"did you forget to add aggregation to {last_layer}?"
                )
          six.reraise(t, v, tb)

merlin/models/tf/core/combinators.py:780:


tp = <class 'TypeError'>, value = None, tb = None

def reraise(tp, value, tb=None):
    try:
        if value is None:
            value = tp()
        if value.__traceback__ is not tb:
          raise value.with_traceback(tb)

../../../.local/lib/python3.8/site-packages/six.py:702:


self = SequentialBlock(
(layers): List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): Paral...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
layers = List(
(0): ParallelBlock(
(parallel_layers): Dict(
(categorical): ParallelBlock(
(parallel_layers)... (item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build_sequentially(self, layers, input_shape):
    """Build layers sequentially."""
    last_layer = None
    for layer in layers:
        try:
>           layer.build(input_shape)

merlin/models/tf/core/combinators.py:772:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shapes = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def build(self, input_shapes):
    super().build(input_shapes)
    output_shapes = input_shapes
    if self.pre:
        self.pre.build(input_shapes)
        output_shapes = self.pre.compute_output_shape(input_shapes)
>   output_shapes = self.compute_call_output_shape(output_shapes)

merlin/models/tf/core/tabular.py:313:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
input_shape = {'item_category': TensorShape([50, 16]), 'item_id': TensorShape([50, 16]), 'user_id': TensorShape([50, 16])}

def compute_call_output_shape(self, input_shape):
    if self.add_to_context:
        return {}
>   outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


.0 = <dict_itemiterator object at 0x7fdcf18da1d0>

> outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}

merlin/models/tf/core/tabular.py:582:


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: Couldn't build Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00095159, -0.00780176, -0.01070255, ..., -0.0245008 ,
E -0.00115737, -0.02886876],
E [-0.012412 , 0.0399057 , 0.04028242, ..., -0.04560817,
E 0.00926073, -0.00812765],
E [-0.0196968 , 0.025419 , 0.00797405, ..., 0.04023785,
E -0.0137983 , -0.03792955],
E ...,
E [ 0.00676475, 0.00952966, -0.01445646, ..., -0.00510135,
E -0.04126855, -0.04733052],
E [-0.02780222, -0.03965948, 0.04582984, ..., -0.02401545,
E -0.04722666, 0.02736635],
E [-0.02108997, 0.00042298, 0.04603405, ..., -0.03562424,
E 0.01985692, 0.0199568 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03682753, -0.00775306, 0.00989783, ..., 0.04243133,
E 0.02013233, 0.0142022 ],
E [ 0.03489197, -0.00297128, -0.02943789, ..., 0.02520556,
E 0.04500607, -0.03690022],
E [-0.00748165, -0.01145886, 0.02205738, ..., 0.026886 ,
E -0.00793109, 0.03001534],
E ...,
E [ 0.03647201, -0.02389548, 0.03249105, ..., -0.04324914,
E 0.02238042, -0.01105363],
E [-0.0121203 , -0.02577748, -0.03620006, ..., 0.01091584,
E -0.0377016 , 0.04898537],
E [-0.03664935, 0.01268804, 0.03041216, ..., -0.04956597,
E 0.00527128, -0.00412567]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03058578, -0.00037177, -0.04440222, ..., -0.0143258 ,
E 0.0078275 , -0.02509269],
E [ 0.00060018, 0.02608771, 0.04238523, ..., 0.01086897,
E 0.02085884, 0.01716653],
E [-0.02398338, -0.0076279 , -0.0233204 , ..., -0.02491845,
E -0.01284405, 0.04893791],
E ...,
E [ 0.00118636, -0.02557309, 0.04411072, ..., 0.01656295,
E -0.00796812, -0.03289662],
E [ 0.02188356, -0.01895338, -0.01765849, ..., -0.038915 ,
E -0.02888047, 0.01209231],
E [ 0.01988271, 0.04559529, -0.0447374 , ..., 0.0016938 ,
E -0.01167984, -0.00597036]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError

During handling of the above exception, another exception occurred:

music_streaming_data = <merlin.io.dataset.Dataset object at 0x7fdce9ce9580>
run_eagerly = False

@pytest.mark.parametrize("run_eagerly", [True, False])
def test_deepfm_model_categ_and_continuous_feats(music_streaming_data, run_eagerly):
    music_streaming_data.schema = music_streaming_data.schema.select_by_name(
        ["item_id", "item_category", "user_id", "user_age", "click"]
    )
    model = ml.DeepFMModel(
        music_streaming_data.schema,
        embedding_dim=16,
        deep_block=ml.MLPBlock([16]),
        prediction_tasks=ml.BinaryClassificationTask("click"),
    )
>   testing_utils.model_test(model, music_streaming_data, run_eagerly=run_eagerly)

tests/unit/tf/models/test_ranking.py:186:


merlin/models/tf/utils/testing_utils.py:89: in model_test
losses = model.fit(dataset, batch_size=50, epochs=epochs, steps_per_epoch=1)
merlin/models/tf/models/base.py:717: in fit
return super().fit(**fit_kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:1358: in fit
data_handler = data_adapter.get_data_handler(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1401: in get_data_handler
return DataHandler(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:1151: in __init__
self._adapter = adapter_cls(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:926: in __init__
super(KerasSequenceAdapter, self).__init__(
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:808: in __init__
model.distribute_strategy.run(
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:1312: in run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:2888: in call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/distribute/distribute_lib.py:3689: in _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:595: in wrapper
return func(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/data_adapter.py:809: in <lambda>
lambda x: model(x, training=False), args=(concrete_x,))
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/training.py:490: in call
return super().call(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py:60: in error_handler
return fn(*args, **kwargs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:1007: in call
self._maybe_build(inputs)
merlin/models/tf/models/base.py:867: in _maybe_build
super()._maybe_build(inputs)
/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py:2759: in _maybe_build
self.build(input_shapes) # pylint:disable=not-callable
merlin/models/tf/models/base.py:893: in build
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:703: in reraise
raise value
merlin/models/tf/models/base.py:885: in build
layer.build(input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:521: in build
layer.build(layer_input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/combinators.py:129: in build
build_sequentially(self, self.layers, input_shape)
merlin/models/tf/core/combinators.py:780: in build_sequentially
six.reraise(t, v, tb)
../../../.local/lib/python3.8/site-packages/six.py:702: in reraise
raise value.with_traceback(tb)
merlin/models/tf/core/combinators.py:772: in build_sequentially
layer.build(input_shape)
merlin/models/tf/core/tabular.py:313: in build
output_shapes = self.compute_call_output_shape(output_shapes)
merlin/models/tf/core/tabular.py:582: in compute_call_output_shape
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}
merlin/models/tf/core/tabular.py:582: in <dictcomp>
outputs = {k: v for k, v in input_shape.items() if self.check_feature(k)}


self = Filter(
(_feature_shapes): Dict(
(item_id): TensorShape([50, 1])
(item_category): TensorShape([50, 1])
(...es): Dict(
(item_id): tf.int64
(item_category): tf.int64
(user_id): tf.int64
(user_age): tf.int64
)
)
feature_name = 'item_id'

def check_feature(self, feature_name) -> bool:
    if self.exclude:
        return feature_name not in self.feature_names
>   return feature_name in self.feature_names

E TypeError: Couldn't build SequentialBlock(
E (layers): List(
E (0): ParallelBlock(
E (parallel_layers): Dict(
E (categorical): ParallelBlock(
E (parallel_layers): Dict(
E (item_id): EmbeddingTable(
E (features): Dict(
E (item_id): ColumnSchema(name='item_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>, <Tags.ITEM_ID: 'item_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[ 0.00095159, -0.00780176, -0.01070255, ..., -0.0245008 ,
E -0.00115737, -0.02886876],
E [-0.012412 , 0.0399057 , 0.04028242, ..., -0.04560817,
E 0.00926073, -0.00812765],
E [-0.0196968 , 0.025419 , 0.00797405, ..., 0.04023785,
E -0.0137983 , -0.03792955],
E ...,
E [ 0.00676475, 0.00952966, -0.01445646, ..., -0.00510135,
E -0.04126855, -0.04733052],
E [-0.02780222, -0.03965948, 0.04582984, ..., -0.02401545,
E -0.04722666, 0.02736635],
E [-0.02108997, 0.00042298, 0.04603405, ..., -0.03562424,
E 0.01985692, 0.0199568 ]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (item_category): EmbeddingTable(
E (features): Dict(
E (item_category): ColumnSchema(name='item_category', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.ITEM: 'item'>}, properties={'domain': {'min': 0, 'max': 100}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(101, 16) dtype=float32, numpy=
E array([[ 0.03682753, -0.00775306, 0.00989783, ..., 0.04243133,
E 0.02013233, 0.0142022 ],
E [ 0.03489197, -0.00297128, -0.02943789, ..., 0.02520556,
E 0.04500607, -0.03690022],
E [-0.00748165, -0.01145886, 0.02205738, ..., 0.026886 ,
E -0.00793109, 0.03001534],
E ...,
E [ 0.03647201, -0.02389548, 0.03249105, ..., -0.04324914,
E 0.02238042, -0.01105363],
E [-0.0121203 , -0.02577748, -0.03620006, ..., 0.01091584,
E -0.0377016 , 0.04898537],
E [-0.03664935, 0.01268804, 0.03041216, ..., -0.04956597,
E 0.00527128, -0.00412567]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (user_id): EmbeddingTable(
E (features): Dict(
E (user_id): ColumnSchema(name='user_id', tags={<Tags.CATEGORICAL: 'categorical'>, <Tags.USER: 'user'>, <Tags.USER_ID: 'user_id'>, <Tags.ID: 'id'>}, properties={'domain': {'min': 0, 'max': 10000}}, dtype=dtype('int64'), is_list=False, is_ragged=False)
E )
E (table): Embedding(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E (embeddings): <tf.Variable 'model/embeddings:0' shape=(10001, 16) dtype=float32, numpy=
E array([[-0.03058578, -0.00037177, -0.04440222, ..., -0.0143258 ,
E 0.0078275 , -0.02509269],
E [ 0.00060018, 0.02608771, 0.04238523, ..., 0.01086897,
E 0.02085884, 0.01716653],
E [-0.02398338, -0.0076279 , -0.0233204 , ..., -0.02491845,
E -0.01284405, 0.04893791],
E ...,
E [ 0.00118636, -0.02557309, 0.04411072, ..., 0.01656295,
E -0.00796812, -0.03289662],
E [ 0.02188356, -0.01895338, -0.01765849, ..., -0.038915 ,
E -0.02888047, 0.01209231],
E [ 0.01988271, 0.04559529, -0.0447374 , ..., 0.0016938 ,
E -0.01167984, -0.00597036]], dtype=float32)>
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E (1): Filter(
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E ), did you forget to add aggregation to Filter(
E (feature_names): List(
E (0): 'item_id'
E (1): 'item_category'
E (2): 'user_id'
E )
E (_feature_shapes): Dict(
E (item_id): TensorShape([50, 1])
E (item_category): TensorShape([50, 1])
E (user_id): TensorShape([50, 1])
E (user_age): TensorShape([50, 1])
E )
E (_feature_dtypes): Dict(
E (item_id): tf.int64
E (item_category): tf.int64
E (user_id): tf.int64
E (user_age): tf.int64
E )
E )?

merlin/models/tf/core/tabular.py:590: TypeError
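The "did you forget to add aggregation" hint above points at the root cause: a parallel block emits a dict of per-feature tensors, and a downstream layer that expects a single tensor cannot consume it unless an aggregation merges the dict first. The framework-free sketch below (hypothetical `ParallelBlock`/`dense_layer` stand-ins, not the Merlin API) reproduces that failure shape with plain lists in place of tensors:

```python
# Hypothetical stand-ins, not the Merlin API: illustrate why a dict output
# needs an aggregation before it reaches a dense layer.

class ParallelBlock:
    """Applies one branch per feature; aggregation='concat' merges the dict."""

    def __init__(self, branches, aggregation=None):
        self.branches = branches
        self.aggregation = aggregation

    def __call__(self, inputs):
        outputs = {name: fn(inputs[name]) for name, fn in self.branches.items()}
        if self.aggregation == "concat":
            # Concatenate per-feature outputs into one flat "tensor".
            return [v for name in sorted(outputs) for v in outputs[name]]
        return outputs  # still a dict: a dense layer cannot consume this


def dense_layer(x):
    """Stand-in for an MLP layer that requires a single tensor, not a dict."""
    if isinstance(x, dict):
        raise TypeError("did you forget to add aggregation?")
    return [2.0 * v for v in x]


features = {"item_id": [1.0], "user_id": [3.0]}
branches = {name: (lambda v: [v[0] + 0.5]) for name in features}

# With aggregation the dict is flattened and the dense layer works.
aggregated = ParallelBlock(branches, aggregation="concat")(features)
print(dense_layer(aggregated))  # → [3.0, 7.0]

# Without aggregation the dict leaks through and the build-time error appears.
try:
    dense_layer(ParallelBlock(branches)(features))
except TypeError as exc:
    print(exc)  # → did you forget to add aggregation?
```

The fix in this PR follows the same idea: ensure the input block feeding `Filter`/dense towers aggregates its parallel outputs rather than passing the raw feature dict through.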
=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/outputs/test_contrastive.py: 2 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_file3wsfxz8d.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:566: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
===== 4 failed, 680 passed, 11 skipped, 1045 warnings in 971.87s (0:16:11) =====
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins3287177093624312041.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit f75d96fefea201605f6b4cf6afeac4de9468e9df, no merge conflicts.
Running as SYSTEM
Setting status of f75d96fefea201605f6b4cf6afeac4de9468e9df to PENDING with url https://10.20.13.93:8080/job/merlin_models/1308/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse f75d96fefea201605f6b4cf6afeac4de9468e9df^{commit} # timeout=10
Checking out Revision f75d96fefea201605f6b4cf6afeac4de9468e9df (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f75d96fefea201605f6b4cf6afeac4de9468e9df # timeout=10
Commit message: "Fixed failing test"
 > git rev-list --no-walk db03880e65b5ddd7714c516cb06d6e797fae062c # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins14711923962040376517.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 695 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 30%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/examples/test_01_getting_started.py . [ 31%]
tests/unit/tf/examples/test_02_dataschema.py . [ 31%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 31%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 32%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 32%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 32%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 32%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 32%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 32%]
tests/unit/tf/inputs/test_continuous.py ..... [ 33%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 38%]
..... [ 38%]
tests/unit/tf/inputs/test_tabular.py .................. [ 41%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 46%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 47%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 50%]
tests/unit/tf/models/test_base.py s................. [ 53%]
tests/unit/tf/models/test_benchmark.py .. [ 53%]
tests/unit/tf/models/test_ranking.py .................................. [ 58%]
tests/unit/tf/models/test_retrieval.py ................................ [ 63%]
tests/unit/tf/outputs/test_base.py ..... [ 64%]
tests/unit/tf/outputs/test_classification.py ...... [ 64%]
tests/unit/tf/outputs/test_contrastive.py ......... [ 66%]
tests/unit/tf/outputs/test_regression.py .. [ 66%]
tests/unit/tf/outputs/test_sampling.py .... [ 67%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 67%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 69%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 70%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 71%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 71%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
................. [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py .......... [ 80%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_tensor.py .. [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 85%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/outputs/test_contrastive.py: 2 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/__autograph_generated_fileul8i1u_t.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:566: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 684 passed, 11 skipped, 1047 warnings in 976.12s (0:16:16) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins2058080911259813257.sh

@gabrielspmoreira
Member Author

@marcromeyn I have rebased this PR and made the necessary changes so that it uses your updated InputBlockV2

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit 3cdb11def0615cbdfff096c01e428c9f27c7ce38, no merge conflicts.
Running as SYSTEM
Setting status of 3cdb11def0615cbdfff096c01e428c9f27c7ce38 to PENDING with url https://10.20.13.93:8080/job/merlin_models/1310/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse 3cdb11def0615cbdfff096c01e428c9f27c7ce38^{commit} # timeout=10
Checking out Revision 3cdb11def0615cbdfff096c01e428c9f27c7ce38 (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3cdb11def0615cbdfff096c01e428c9f27c7ce38 # timeout=10
Commit message: "Merge branch 'main' into ranking_models_inputs"
 > git rev-list --no-walk 826bc3a35aac38f47e917f78087d3d15091f8f6a # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins15098126964348223522.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 695 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 30%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/examples/test_01_getting_started.py . [ 31%]
tests/unit/tf/examples/test_02_dataschema.py . [ 31%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 31%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 32%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 32%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 32%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 32%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 32%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 32%]
tests/unit/tf/inputs/test_continuous.py ..... [ 33%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 38%]
..... [ 38%]
tests/unit/tf/inputs/test_tabular.py .................. [ 41%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 46%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 47%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 50%]
tests/unit/tf/models/test_base.py s................. [ 53%]
tests/unit/tf/models/test_benchmark.py .. [ 53%]
tests/unit/tf/models/test_ranking.py .................................. [ 58%]
tests/unit/tf/models/test_retrieval.py ................................ [ 63%]
tests/unit/tf/outputs/test_base.py ..... [ 64%]
tests/unit/tf/outputs/test_classification.py ...... [ 64%]
tests/unit/tf/outputs/test_contrastive.py ......... [ 66%]
tests/unit/tf/outputs/test_regression.py .. [ 66%]
tests/unit/tf/outputs/test_sampling.py .... [ 67%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 67%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 69%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 70%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 71%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 71%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
................. [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py .......... [ 80%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_tensor.py .. [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 85%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/outputs/test_contrastive.py: 2 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_filelplad2f0.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:566: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 684 passed, 11 skipped, 1047 warnings in 977.66s (0:16:17) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins15965917917765665375.sh

@nvidia-merlin-bot

Click to view CI Results
GitHub pull request #717 of commit d6df124e55bf66a73f5476d63b42d4efb25bb2fb, no merge conflicts.
Running as SYSTEM
Setting status of d6df124e55bf66a73f5476d63b42d4efb25bb2fb to PENDING with url https://10.20.13.93:8080/job/merlin_models/1313/console and message: 'Pending'
Using context: Jenkins
Building on master in workspace /var/jenkins_home/workspace/merlin_models
using credential nvidia-merlin-bot
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/NVIDIA-Merlin/models/ # timeout=10
Fetching upstream changes from https://github.com/NVIDIA-Merlin/models/
 > git --version # timeout=10
using GIT_ASKPASS to set credentials This is the bot credentials for our CI/CD
 > git fetch --tags --force --progress -- https://github.com/NVIDIA-Merlin/models/ +refs/pull/717/*:refs/remotes/origin/pr/717/* # timeout=10
 > git rev-parse d6df124e55bf66a73f5476d63b42d4efb25bb2fb^{commit} # timeout=10
Checking out Revision d6df124e55bf66a73f5476d63b42d4efb25bb2fb (detached)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f d6df124e55bf66a73f5476d63b42d4efb25bb2fb # timeout=10
Commit message: "Making DeepFM output layer configurable"
 > git rev-list --no-walk 6e9de9672319b896089c94ce3a094a9c7edf8cdc # timeout=10
[merlin_models] $ /bin/bash /tmp/jenkins13169389965715311271.sh
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: testbook in /usr/local/lib/python3.8/dist-packages (0.4.2)
Requirement already satisfied: nbformat>=5.0.4 in /usr/local/lib/python3.8/dist-packages (from testbook) (5.5.0)
Requirement already satisfied: nbclient>=0.4.0 in /usr/local/lib/python3.8/dist-packages (from testbook) (0.6.8)
Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (2.16.1)
Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.16.0)
Requirement already satisfied: jupyter_core in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (4.11.1)
Requirement already satisfied: traitlets>=5.1 in /usr/local/lib/python3.8/dist-packages (from nbformat>=5.0.4->testbook) (5.4.0)
Requirement already satisfied: jupyter-client>=6.1.5 in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (7.3.5)
Requirement already satisfied: nest-asyncio in /usr/local/lib/python3.8/dist-packages (from nbclient>=0.4.0->testbook) (1.5.5)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (22.1.0)
Requirement already satisfied: importlib-resources>=1.4.0; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (5.9.0)
Requirement already satisfied: pkgutil-resolve-name>=1.3.10; python_version < "3.9" in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (1.3.10)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.8/dist-packages (from jsonschema>=2.6->nbformat>=5.0.4->testbook) (0.18.1)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (0.4)
Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (2.8.2)
Requirement already satisfied: pyzmq>=23.0 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (24.0.0)
Requirement already satisfied: tornado>=6.2 in /usr/local/lib/python3.8/dist-packages (from jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (6.2)
Requirement already satisfied: zipp>=3.1.0; python_version < "3.10" in /usr/local/lib/python3.8/dist-packages (from importlib-resources>=1.4.0; python_version < "3.9"->jsonschema>=2.6->nbformat>=5.0.4->testbook) (3.8.1)
Requirement already satisfied: six>=1.5 in /var/jenkins_home/.local/lib/python3.8/site-packages (from python-dateutil>=2.8.2->jupyter-client>=6.1.5->nbclient>=0.4.0->testbook) (1.15.0)
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/jenkins_home/workspace/merlin_models/models, configfile: pyproject.toml
plugins: anyio-3.6.1, xdist-2.5.0, forked-1.4.0, cov-3.0.0
collected 695 items

tests/unit/config/test_schema.py .... [ 0%]
tests/unit/datasets/test_advertising.py .s [ 0%]
tests/unit/datasets/test_ecommerce.py ..sss [ 1%]
tests/unit/datasets/test_entertainment.py ....sss. [ 2%]
tests/unit/datasets/test_social.py . [ 2%]
tests/unit/datasets/test_synthetic.py ...... [ 3%]
tests/unit/implicit/test_implicit.py . [ 3%]
tests/unit/lightfm/test_lightfm.py . [ 4%]
tests/unit/tf/test_core.py ...... [ 4%]
tests/unit/tf/test_dataset.py ................ [ 7%]
tests/unit/tf/test_public_api.py . [ 7%]
tests/unit/tf/blocks/test_cross.py ........... [ 8%]
tests/unit/tf/blocks/test_dlrm.py .......... [ 10%]
tests/unit/tf/blocks/test_interactions.py ... [ 10%]
tests/unit/tf/blocks/test_mlp.py ................................. [ 15%]
tests/unit/tf/blocks/test_optimizer.py s................................ [ 20%]
..................... [ 23%]
tests/unit/tf/blocks/retrieval/test_base.py . [ 23%]
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py .. [ 23%]
tests/unit/tf/blocks/retrieval/test_two_tower.py ........... [ 25%]
tests/unit/tf/blocks/sampling/test_cross_batch.py . [ 25%]
tests/unit/tf/blocks/sampling/test_in_batch.py . [ 25%]
tests/unit/tf/core/test_aggregation.py ......... [ 26%]
tests/unit/tf/core/test_base.py .. [ 27%]
tests/unit/tf/core/test_combinators.py s................... [ 30%]
tests/unit/tf/core/test_encoder.py . [ 30%]
tests/unit/tf/core/test_index.py ... [ 30%]
tests/unit/tf/core/test_prediction.py .. [ 30%]
tests/unit/tf/core/test_tabular.py .... [ 31%]
tests/unit/tf/examples/test_01_getting_started.py . [ 31%]
tests/unit/tf/examples/test_02_dataschema.py . [ 31%]
tests/unit/tf/examples/test_03_exploring_different_models.py . [ 31%]
tests/unit/tf/examples/test_04_export_ranking_models.py . [ 32%]
tests/unit/tf/examples/test_05_export_retrieval_model.py . [ 32%]
tests/unit/tf/examples/test_06_advanced_own_architecture.py . [ 32%]
tests/unit/tf/examples/test_07_train_traditional_models.py . [ 32%]
tests/unit/tf/examples/test_usecase_ecommerce_session_based.py . [ 32%]
tests/unit/tf/examples/test_usecase_pretrained_embeddings.py . [ 32%]
tests/unit/tf/inputs/test_continuous.py ..... [ 33%]
tests/unit/tf/inputs/test_embedding.py ................................. [ 38%]
..... [ 38%]
tests/unit/tf/inputs/test_tabular.py .................. [ 41%]
tests/unit/tf/layers/test_queue.py .............. [ 43%]
tests/unit/tf/losses/test_losses.py ....................... [ 46%]
tests/unit/tf/metrics/test_metrics_popularity.py ..... [ 47%]
tests/unit/tf/metrics/test_metrics_topk.py ....................... [ 50%]
tests/unit/tf/models/test_base.py s................. [ 53%]
tests/unit/tf/models/test_benchmark.py .. [ 53%]
tests/unit/tf/models/test_ranking.py .................................. [ 58%]
tests/unit/tf/models/test_retrieval.py ................................ [ 63%]
tests/unit/tf/outputs/test_base.py ..... [ 64%]
tests/unit/tf/outputs/test_classification.py ...... [ 64%]
tests/unit/tf/outputs/test_contrastive.py ......... [ 66%]
tests/unit/tf/outputs/test_regression.py .. [ 66%]
tests/unit/tf/outputs/test_sampling.py .... [ 67%]
tests/unit/tf/prediction_tasks/test_classification.py .. [ 67%]
tests/unit/tf/prediction_tasks/test_multi_task.py ................ [ 69%]
tests/unit/tf/prediction_tasks/test_next_item.py ..... [ 70%]
tests/unit/tf/prediction_tasks/test_regression.py ..... [ 71%]
tests/unit/tf/prediction_tasks/test_retrieval.py . [ 71%]
tests/unit/tf/prediction_tasks/test_sampling.py ...... [ 72%]
tests/unit/tf/transforms/test_bias.py .. [ 72%]
tests/unit/tf/transforms/test_features.py s............................. [ 76%]
................. [ 79%]
tests/unit/tf/transforms/test_negative_sampling.py .......... [ 80%]
tests/unit/tf/transforms/test_noise.py ..... [ 81%]
tests/unit/tf/transforms/test_tensor.py .. [ 81%]
tests/unit/tf/utils/test_batch.py .... [ 82%]
tests/unit/tf/utils/test_tf_utils.py ..... [ 82%]
tests/unit/torch/test_dataset.py ......... [ 84%]
tests/unit/torch/test_public_api.py . [ 84%]
tests/unit/torch/block/test_base.py .... [ 84%]
tests/unit/torch/block/test_mlp.py . [ 85%]
tests/unit/torch/features/test_continuous.py .. [ 85%]
tests/unit/torch/features/test_embedding.py .............. [ 87%]
tests/unit/torch/features/test_tabular.py .... [ 87%]
tests/unit/torch/model/test_head.py ............ [ 89%]
tests/unit/torch/model/test_model.py .. [ 89%]
tests/unit/torch/tabular/test_aggregation.py ........ [ 91%]
tests/unit/torch/tabular/test_tabular.py ... [ 91%]
tests/unit/torch/tabular/test_transformations.py ....... [ 92%]
tests/unit/utils/test_schema_utils.py ................................ [ 97%]
tests/unit/xgb/test_xgboost.py .................... [100%]

=============================== warnings summary ===============================
../../../../../usr/lib/python3/dist-packages/requests/__init__.py:89
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.12) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:36: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
'nearest': pil_image.NEAREST,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:37: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
'bilinear': pil_image.BILINEAR,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
'bicubic': pil_image.BICUBIC,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:39: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
'hamming': pil_image.HAMMING,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:40: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
'box': pil_image.BOX,

../../../../../usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41
/usr/local/lib/python3.8/dist-packages/keras/utils/image_utils.py:41: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
'lanczos': pil_image.LANCZOS,

tests/unit/datasets/test_advertising.py: 1 warning
tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 6 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 8 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 38 warnings
tests/unit/tf/models/test_retrieval.py: 60 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/prediction_tasks/test_retrieval.py: 1 warning
tests/unit/tf/transforms/test_bias.py: 2 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/transforms/test_noise.py: 1 warning
tests/unit/tf/utils/test_batch.py: 9 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 3 warnings
tests/unit/xgb/test_xgboost.py: 18 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.ITEM_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.ITEM: 'item'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_ecommerce.py: 2 warnings
tests/unit/datasets/test_entertainment.py: 4 warnings
tests/unit/datasets/test_social.py: 1 warning
tests/unit/datasets/test_synthetic.py: 5 warnings
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_core.py: 6 warnings
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/test_cross.py: 5 warnings
tests/unit/tf/blocks/test_dlrm.py: 9 warnings
tests/unit/tf/blocks/test_interactions.py: 2 warnings
tests/unit/tf/blocks/test_mlp.py: 26 warnings
tests/unit/tf/blocks/test_optimizer.py: 30 warnings
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 10 warnings
tests/unit/tf/core/test_aggregation.py: 6 warnings
tests/unit/tf/core/test_base.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 2 warnings
tests/unit/tf/core/test_index.py: 3 warnings
tests/unit/tf/core/test_prediction.py: 2 warnings
tests/unit/tf/inputs/test_continuous.py: 4 warnings
tests/unit/tf/inputs/test_embedding.py: 19 warnings
tests/unit/tf/inputs/test_tabular.py: 18 warnings
tests/unit/tf/models/test_base.py: 17 warnings
tests/unit/tf/models/test_benchmark.py: 2 warnings
tests/unit/tf/models/test_ranking.py: 36 warnings
tests/unit/tf/models/test_retrieval.py: 32 warnings
tests/unit/tf/outputs/test_base.py: 5 warnings
tests/unit/tf/outputs/test_classification.py: 6 warnings
tests/unit/tf/outputs/test_contrastive.py: 9 warnings
tests/unit/tf/outputs/test_regression.py: 2 warnings
tests/unit/tf/prediction_tasks/test_classification.py: 2 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 5 warnings
tests/unit/tf/transforms/test_features.py: 10 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 10 warnings
tests/unit/tf/utils/test_batch.py: 7 warnings
tests/unit/torch/block/test_base.py: 4 warnings
tests/unit/torch/block/test_mlp.py: 1 warning
tests/unit/torch/features/test_continuous.py: 1 warning
tests/unit/torch/features/test_embedding.py: 4 warnings
tests/unit/torch/features/test_tabular.py: 4 warnings
tests/unit/torch/model/test_head.py: 12 warnings
tests/unit/torch/model/test_model.py: 2 warnings
tests/unit/torch/tabular/test_aggregation.py: 6 warnings
tests/unit/torch/tabular/test_transformations.py: 2 warnings
tests/unit/xgb/test_xgboost.py: 17 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.USER_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.USER: 'user'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/datasets/test_entertainment.py: 1 warning
tests/unit/implicit/test_implicit.py: 1 warning
tests/unit/lightfm/test_lightfm.py: 1 warning
tests/unit/tf/test_dataset.py: 1 warning
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py: 2 warnings
tests/unit/tf/blocks/retrieval/test_two_tower.py: 2 warnings
tests/unit/tf/core/test_combinators.py: 10 warnings
tests/unit/tf/core/test_encoder.py: 1 warning
tests/unit/tf/core/test_prediction.py: 1 warning
tests/unit/tf/inputs/test_continuous.py: 2 warnings
tests/unit/tf/inputs/test_embedding.py: 9 warnings
tests/unit/tf/inputs/test_tabular.py: 8 warnings
tests/unit/tf/models/test_ranking.py: 20 warnings
tests/unit/tf/models/test_retrieval.py: 4 warnings
tests/unit/tf/prediction_tasks/test_multi_task.py: 16 warnings
tests/unit/tf/prediction_tasks/test_regression.py: 3 warnings
tests/unit/tf/transforms/test_negative_sampling.py: 9 warnings
tests/unit/xgb/test_xgboost.py: 12 warnings
/usr/local/lib/python3.8/dist-packages/merlin/schema/tags.py:148: UserWarning: Compound tags like Tags.SESSION_ID have been deprecated and will be removed in a future version. Please use the atomic versions of these tags, like [<Tags.SESSION: 'session'>, <Tags.ID: 'id'>].
warnings.warn(

tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_matrix_factorization.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/blocks/retrieval/test_two_tower.py::test_matrix_factorization_embedding_export
tests/unit/tf/inputs/test_embedding.py::test_embedding_features_exporting_and_loading_pretrained_initializer
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/inputs/embedding.py:879: DeprecationWarning: This function is deprecated in favor of cupy.from_dlpack
embeddings_cupy = cupy.fromDlpack(to_dlpack(tf.convert_to_tensor(embeddings)))

tests/unit/tf/core/test_index.py: 4 warnings
tests/unit/tf/models/test_retrieval.py: 54 warnings
tests/unit/tf/outputs/test_contrastive.py: 2 warnings
tests/unit/tf/prediction_tasks/test_next_item.py: 3 warnings
tests/unit/tf/utils/test_batch.py: 2 warnings
/tmp/autograph_generated_fileecgyyjtz.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)

tests/unit/tf/models/test_base.py::test_model_pre_post[True]
tests/unit/tf/models/test_base.py::test_model_pre_post[False]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.1]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.3]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.5]
tests/unit/tf/transforms/test_noise.py::test_stochastic_swap_noise[0.7]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/util/dispatch.py:1082: UserWarning: tf.keras.backend.random_binomial is deprecated, and will be removed in a future version.Please use tf.keras.backend.random_bernoulli instead.
return dispatch_target(*args, **kwargs)

tests/unit/tf/models/test_base.py::test_freeze_parallel_block[True]
tests/unit/tf/models/test_base.py::test_freeze_sequential_block
tests/unit/tf/models/test_base.py::test_freeze_unfreeze
tests/unit/tf/models/test_base.py::test_unfreeze_all_blocks
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/gradient_descent.py:108: UserWarning: The lr argument is deprecated, use learning_rate instead.
super(SGD, self).__init__(name, **kwargs)

tests/unit/tf/models/test_ranking.py::test_deepfm_model_only_categ_feats[False]
tests/unit/tf/models/test_ranking.py::test_deepfm_model_categ_and_continuous_feats[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_3/parallel_block_2/sequential_block_3/sequential_block_2/private__dense_1/dense_1/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_categorical_one_hot[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_model_hashed_cross[False]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_2/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[True]
tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/var/jenkins_home/workspace/merlin_models/models/merlin/models/tf/transforms/features.py:566: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_embedding_custom_inputblock[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:371: UserWarning: Please make sure input features to be categorical, detect user_age has no categorical tag
return py_builtins.overload_of(f)(*args)

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_onehot_multihot_feature_interaction[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_5/sequential_block_9/sequential_block_8/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/tf/models/test_ranking.py::test_wide_deep_model_wide_feature_interaction_multi_optimizer[False]
/usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape_1:0", shape=(None,), dtype=int32), values=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Reshape:0", shape=(None, 1), dtype=float32), dense_shape=Tensor("gradient_tape/model/parallel_block_4/sequential_block_6/sequential_block_5/private__dense_3/dense_3/embedding_lookup_sparse/Cast:0", shape=(2,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(

tests/unit/torch/block/test_mlp.py::test_mlp_block
/var/jenkins_home/workspace/merlin_models/models/tests/unit/torch/_conftest.py:151: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:201.)
return {key: torch.tensor(value) for key, value in data.items()}

tests/unit/xgb/test_xgboost.py::test_without_dask_client
tests/unit/xgb/test_xgboost.py::TestXGBoost::test_music_regression
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs0-DaskDeviceQuantileDMatrix]
tests/unit/xgb/test_xgboost.py::test_gpu_hist_dmatrix[fit_kwargs1-DaskDMatrix]
tests/unit/xgb/test_xgboost.py::TestEvals::test_multiple
tests/unit/xgb/test_xgboost.py::TestEvals::test_default
tests/unit/xgb/test_xgboost.py::TestEvals::test_train_and_valid
tests/unit/xgb/test_xgboost.py::TestEvals::test_invalid_data
/var/jenkins_home/workspace/merlin_models/models/merlin/models/xgb/__init__.py:335: UserWarning: Ignoring list columns as inputs to XGBoost model: ['item_genres', 'user_genres'].
warnings.warn(f"Ignoring list columns as inputs to XGBoost model: {list_column_names}.")

tests/unit/xgb/test_xgboost.py::TestXGBoost::test_unsupported_objective
/usr/local/lib/python3.8/dist-packages/tornado/ioloop.py:350: DeprecationWarning: make_current is deprecated; start the event loop first
self.make_current()

tests/unit/xgb/test_xgboost.py: 14 warnings
/usr/local/lib/python3.8/dist-packages/xgboost/dask.py:884: RuntimeWarning: coroutine 'Client._wait_for_workers' was never awaited
client.wait_for_workers(n_workers)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.

tests/unit/xgb/test_xgboost.py: 11 warnings
/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1183: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
mask = pd.Series(mask)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] tests/unit/datasets/test_advertising.py:20: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:62: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:78: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [1] tests/unit/datasets/test_ecommerce.py:92: ALI-CCP data is not available, pass it through env variable $DATA_PATH_ALICCP
SKIPPED [3] tests/unit/datasets/test_entertainment.py:44: No data-dir available, pass it through env variable $INPUT_DATA_DIR
SKIPPED [4] ../../../../../usr/local/lib/python3.8/dist-packages/tensorflow/python/framework/test_util.py:2746: Not a test.
========== 684 passed, 11 skipped, 1047 warnings in 981.71s (0:16:21) ==========
Performing Post build task...
Match found for : : True
Logical operation result is TRUE
Running script : #!/bin/bash
cd /var/jenkins_home/
CUDA_VISIBLE_DEVICES=1 python test_res_push.py "https://api.GitHub.com/repos/NVIDIA-Merlin/models/issues/$ghprbPullId/comments" "/var/jenkins_home/jobs/$JOB_NAME/builds/$BUILD_NUMBER/log"
[merlin_models] $ /bin/bash /tmp/jenkins17077715439902186611.sh

@gabrielspmoreira gabrielspmoreira merged commit 1cf436f into main Sep 20, 2022

Labels

area/api, area/ranking, chore (Maintenance for the repository)

Development

Successfully merging this pull request may close these issues.

[BUG] DeepFM implementation is incorrect and misses tests
[Task] InputBlockV2 - Usage in ranking models high-level API
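For reference, the FM second-order term that the first linked issue is about can be sketched with Rendle's O(n·k) identity, 0.5 · Σ_k [(Σ_f e_fk)² − Σ_f e_fk²], which equals the sum of pairwise dot products ⟨e_i, e_j⟩ over feature pairs i < j. This is a minimal NumPy illustration of the math only, not the Merlin `FMPairwiseInteraction` implementation, and the function name here is hypothetical:

```python
import numpy as np

def fm_pairwise_interaction(emb: np.ndarray) -> np.ndarray:
    """Second-order FM term via Rendle's square-of-sum identity.

    emb: array of shape (batch, num_features, dim), one embedding
    vector per feature. Returns one scalar per example equal to the
    sum of pairwise dot products <e_i, e_j> over feature pairs i < j.
    """
    sum_then_square = np.square(emb.sum(axis=1))   # (batch, dim)
    square_then_sum = np.square(emb).sum(axis=1)   # (batch, dim)
    return 0.5 * (sum_then_square - square_then_sum).sum(axis=1)  # (batch,)
```

Note that the output is a single scalar per example, matching the fix in this PR where the FM and deep towers each produce a 1-d output that is summed, rather than being projected by an extra MLP layer.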

4 participants