Experts BERT - Collection of BERT experts fine-tuned on different datasets.

pip install vectorhub[encoders-text-tfhub]

Details

Release date: 2021-02-15

Vector length: 768 (BERT Base)

Repo: https://tfhub.dev/google/collections/experts/bert/1

Paper: https://arxiv.org/abs/1810.04805v2

Example

#pip install vectorhub[encoders-text-tfhub]
#FOR WINDOWS: pip install vectorhub[encoders-text-tfhub-windows]
from vectorhub.encoders.text.tfhub import ExpertsBert2Vec
model = ExpertsBert2Vec()
model.encode("I enjoy taking long walks along the beach with my dog.")

Index and search vectors

Index and search your vectors easily on the cloud using 1 line of code!

username = '<your username>'
email = '<your email>'
# You can request an api_key using your username and email.
api_key = model.request_api_key(username, email)

# Index in 1 line of code
items = ['chicken', 'toilet', 'paper', 'enjoy walking']
model.add_documents(username, api_key, items)

# Search in 1 line of code and get the most similar results.
model.search('basin')

# Add metadata to your search
metadata = [
    {'num_of_letters': 7, 'type': 'animal'},
    {'num_of_letters': 6, 'type': 'household_items'},
    {'num_of_letters': 5, 'type': 'household_items'},
    {'num_of_letters': 12, 'type': 'emotion'}
]
model.add_documents(username, api_key, items, metadata=metadata)

Description

Starting from a pre-trained BERT model and fine-tuning on a downstream task gives impressive results on many NLP tasks. Performance can be increased further by starting from a BERT model that better aligns with, or transfers to, the task at hand, particularly when the number of downstream examples is small. For example, one can use a BERT model that was trained on text from a similar domain, or a BERT model that was trained for a similar task.

This is a collection of such BERT "expert" models, trained on a variety of datasets and tasks to improve performance on downstream tasks such as question answering, tasks that require natural language inference, NLP in the medical text domain, and more.
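
If you want to use one of these experts outside of VectorHub, the models can also be loaded directly from TF Hub. Below is a minimal sketch, assuming tensorflow, tensorflow_hub and tensorflow_text are installed; the bert_en_uncased preprocessor handle is an assumption on my part and is not part of this collection.

# Minimal sketch: encode a sentence with one expert model straight from TF Hub.
# Assumes tensorflow, tensorflow_hub and tensorflow_text are installed.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the ops required by the BERT preprocessor

preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer("https://tfhub.dev/google/experts/bert/wiki_books/2")

sentences = tf.constant(["I enjoy taking long walks along the beach with my dog."])
outputs = encoder(preprocess(sentences))
pooled = outputs["pooled_output"]       # (1, 768) sentence-level embedding
sequence = outputs["sequence_output"]   # (1, 128, 768) token-level embeddings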


Working in Colab

If you are using this in Colab and want to cache the model so you don't have to re-download it every session, use:

import os
# Cache downloaded TF Hub models in Google Drive so they persist across sessions
os.environ['TFHUB_CACHE_DIR'] = "drive/MyDrive/"
# Download models as compressed archives and unpack them into the cache directory above
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "COMPRESSED"

Model Versions

Model                                                        Vector Length
https://tfhub.dev/google/experts/bert/wiki_books/2           768
https://tfhub.dev/google/experts/bert/wiki_books/mnli/2      768
https://tfhub.dev/google/experts/bert/wiki_books/qnli/2      768
https://tfhub.dev/google/experts/bert/wiki_books/qqp/2       768
https://tfhub.dev/google/experts/bert/wiki_books/sst2/2      768
https://tfhub.dev/google/experts/bert/wiki_books/squad2/2    768
https://tfhub.dev/google/experts/bert/pubmed/2               768
https://tfhub.dev/google/experts/bert/pubmed/squad2/2        768
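
Any of the handles above can be swapped in for the default model. A sketch, assuming the encoder accepts a model_url argument like other VectorHub TF Hub encoders (check the class signature if this differs):

# Sketch: point the encoder at the PubMed expert instead of the default.
# The model_url keyword is an assumption based on other VectorHub TF Hub encoders.
from vectorhub.encoders.text.tfhub import ExpertsBert2Vec
pubmed_model = ExpertsBert2Vec(model_url='https://tfhub.dev/google/experts/bert/pubmed/2')
pubmed_model.encode("Metformin is a first-line treatment for type 2 diabetes.")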

Limitations

  • NA

Training Corpora:

  • BooksCorpus (800M words)
  • Wikipedia (2,500M words)
  • MEDLINE/PubMed
  • CORD-19
  • CoLA
  • MRPC

Other Notes:

  • NA