Applying a trained model with TensorFlow in a Transformers pipeline throws an error

I'm using this GitHub text summarization project and I have a problem. I have been struggling for two weeks and could not figure it out.
I'm using a notebook from this GitHub repository:
GitHub - flogothetis/Abstractive-Summarization-T5-Keras: abstractive text summarization with a T5 Transformer encoder-decoder model in Keras

notebook link:
Abstractive-Summarization-T5-Keras/AbstractiveSummarizationT5.ipynb at main · flogothetis/Abstractive-Summarization-T5-Keras · GitHub

After training the model, I want to use the Hugging Face Transformers pipeline to generate summaries:

from transformers import pipeline

summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
summarizer("some text")

but it pops up an error:

AttributeError: 'Functional' object has no attribute 'config'

Does anyone have an idea how I can solve it?

full error:

AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_20/1872405895.py in <module>
----> 1 summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
      2
      3 summarizer("The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen")

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
    432                 break
    433
--> 434     return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs)
     37
     38     def __init__(self, *args, **kwargs):
---> 39         super().__init__(*args, **kwargs)
     40
     41         self.check_model_type(

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output)
    548
    549         # Update config with task specific parameters
--> 550         task_specific_params = self.model.config.task_specific_params
    551         if task_specific_params is not None and task in task_specific_params:
    552             self.model.config.update(task_specific_params.get(task))

AttributeError: 'Functional' object has no attribute 'config'

Hi @Sara_Taylor,

Sorry for the delayed response.
The error AttributeError: 'Functional' object has no attribute 'config' occurs when the model object you pass to the Hugging Face pipeline is not a Transformers model but a plain Keras model (here a Functional model, which is what the notebook builds), so it has no .config attribute for the pipeline to read. The pipeline expects a model loaded through the Transformers API, for example TFAutoModelForSeq2SeqLM.
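
Since the notebook fine-tunes T5 inside a custom Keras Functional wrapper, you first need to save the underlying Transformers model in the Hugging Face format before the pipeline can load it. Below is a minimal sketch of that step; the variable name t5_model and the output directory are hypothetical placeholders, so adjust them to match your notebook:

from transformers import TFT5ForConditionalGeneration, T5Tokenizer

# Hypothetical name: the underlying Hugging Face T5 model that the notebook
# wraps in its Keras Functional model; replace with your actual variable.
t5_model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

# ... fine-tune t5_model as done in the notebook ...

# Save the fine-tuned weights and the tokenizer in the Hugging Face format
# so that from_pretrained() / pipeline() can reload them later.
t5_model.save_pretrained("path_to_your_model_directory")
T5Tokenizer.from_pretrained("t5-small").save_pretrained("path_to_your_model_directory")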

Kindly check out this example of instantiating the pipeline for your reference:

from transformers import TFAutoModelForSeq2SeqLM, T5Tokenizer, pipeline

# Load the trained model and tokenizer
model = TFAutoModelForSeq2SeqLM.from_pretrained('path_to_your_model_directory')
tokenizer = T5Tokenizer.from_pretrained('t5-small')

# Create a summarization pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")

# Generate summary
summary = summarizer("The US has passed the peak on new coronavirus cases...", max_length=50)
print(summary)
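
Note that 'path_to_your_model_directory' must point to a directory created by save_pretrained() (for TensorFlow models it contains config.json and tf_model.h5); if you saved the tokenizer to the same directory, you can load both the model and the tokenizer from that path.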

Hope this helps. Thank you.