
pyspark - How to use input_example in MLFlow logged ONNX model in Databricks to make predictions?


I logged an ONNX model (converted from a pyspark model) in MLflow like this:

import mlflow
import mlflow.onnx

with mlflow.start_run() as run:
    mlflow.onnx.log_model(
        onnx_model=my_onnx_model,
        artifact_path="onnx_model",
        input_example=input_example,
    )

where input_example is a pandas DataFrame that gets saved to the run's artifacts.
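
For illustration, such an input_example might be built like this (the column names and values here are hypothetical; use your model's actual input schema):

import pandas as pd

# Hypothetical two-row example; replace the columns with the
# ONNX model's real input schema.
input_example = pd.DataFrame(
    {
        "feature_1": [0.5, 1.2],
        "feature_2": [3.0, 4.5],
    }
)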

On the Databricks experiments page, I can see the logged model along with an input_example.json artifact that indeed contains the data I provided as input_example when logging the model.
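
That artifact can also be fetched programmatically; a minimal sketch, assuming the run ID placeholder is filled in and using the standard mlflow.artifacts API:

import mlflow

# Download the input_example.json logged next to the model.
local_path = mlflow.artifacts.download_artifacts(
    run_id="<some-run-id>",
    artifact_path="onnx_model/input_example.json",
)
print(local_path)  # path to a local copy of the example payload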

How can I use that data now to make predictions, to test whether the ONNX model was logged correctly? On the model artifacts page in the Databricks UI, I see:

from mlflow.models import validate_serving_input

model_uri = 'runs:/<some-model-id>/onnx_model'

# The logged model does not contain an input_example.
# Manually generate a serving payload to verify your model prior to deployment.
from mlflow.models import convert_input_example_to_serving_input

# Define INPUT_EXAMPLE via assignment with your own input example to the model
# A valid input example is a data instance suitable for pyfunc prediction
serving_payload = convert_input_example_to_serving_input(INPUT_EXAMPLE)

# Validate the serving payload works on the model
validate_serving_input(model_uri, serving_payload)
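
For reference, one way to fill in that template without hand-writing INPUT_EXAMPLE is to reload the example that was logged with the model. A minimal sketch, assuming MLflow 2.x, where mlflow.models.Model exposes load_input_example and the ONNX model can be loaded through the pyfunc flavor:

import mlflow
from mlflow.models import Model, convert_input_example_to_serving_input, validate_serving_input

model_uri = 'runs:/<some-model-id>/onnx_model'

# Reload the input example that was saved alongside the model.
input_example = Model.load(model_uri).load_input_example(model_uri)

# Check that the example round-trips through the serving path.
serving_payload = convert_input_example_to_serving_input(input_example)
validate_serving_input(model_uri, serving_payload)

# Alternatively, load the model as a generic pyfunc and predict directly.
pyfunc_model = mlflow.pyfunc.load_model(model_uri)
print(pyfunc_model.predict(input_example))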