Pinned inference notebook into model exporter.

This commit is contained in:
Mateo Cedillo
2023-06-07 20:47:46 -05:00
parent acd7e04976
commit 218efcb102


@@ -5,7 +5,7 @@
"colab": {
"provenance": [],
"gpuType": "T4",
"authorship_tag": "ABX9TyNHZIC4OUHDX+ElHmpW/AAV",
"authorship_tag": "ABX9TyN4A5uP0tIg9xc15nyLxbO2",
"include_colab_link": true
},
"kernelspec": {
@@ -147,6 +147,25 @@
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"# I want to test this model!\n",
"\n",
"Sure! You can upload your generated package to your friends, to a cloud storage and test it on:\n",
"* [The inference notebook](https://colab.research.google.com/github/rmcpantoja/piper/blob/master/notebooks/piper_inference_(ONNX).ipynb)\n",
" * run the cells in order for it to work correctly, as well as all the notebooks. Also, the inference notebook will guide you through the process using the enhanced accessibility feature if you wish. It's easy to use. Test it!\n",
"* Or through the NVDA screen reader!\n",
" * Download and install the latest version of the [add-on](https://github.com/mush42/piper-nvda/releases).\n",
" * Once the plugin is installed, go to NVDA menu/preferences/settings... and look for the `Piper Voice Manager` category.\n",
" * Tab until you find the `Install from local file` button, press enter and select the generated package in your downloads.\n",
" * Once the package is selected and installed, apply the changes and restart NVDA to update the voice list.\n",
"* Enjoy your creation!"
],
"metadata": {
"id": "IRiNBHkeoDbC"
}
}
]
}
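Before testing in the inference notebook or NVDA, a quick local sanity check of the generated package can catch an incomplete export. The sketch below is a minimal, hypothetical example: Piper voices typically ship the ONNX model alongside a `.onnx.json` config, and the field names shown (`audio.sample_rate`, `espeak.voice`) are assumptions based on typical Piper voice configs, not something this commit guarantees. Check your own generated file.

```python
import json

# Hypothetical stand-in for the .onnx.json voice config that ships in a
# Piper voice package. The structure here is an assumption based on
# typical Piper exports -- inspect your own generated file to confirm.
example_config = json.dumps({
    "audio": {"sample_rate": 22050},
    "espeak": {"voice": "en-us"},
    "num_symbols": 256,
})

def summarize_voice_config(raw: str) -> dict:
    """Extract the fields most useful for a quick pre-test sanity check."""
    cfg = json.loads(raw)
    return {
        "sample_rate": cfg["audio"]["sample_rate"],
        "espeak_voice": cfg["espeak"]["voice"],
    }

summary = summarize_voice_config(example_config)
print(summary)
```

If the config parses and the sample rate and espeak voice look right, the package is at least structurally intact and worth loading in the inference notebook.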