[Agent Application] Marketing Master | Copywriting Assistant

💕 Marketing Master | Copywriting Assistant 💕

🔥 Unlock unlimited creative possibilities! Want to effortlessly master WeChat Moments, Xiaohongshu, and official-account posts? Looking for one-click PPT outlines, poetry, or essay inspiration? We have it all!

🌟 Feature Highlights
WeChat Moments copy 🍇: make your posts instantly eye-catching!
Flattery assistant 🍓: sweet talk at your fingertips!
Full Xiaohongshu toolkit 🍒🍑🌸: notes, titles, and product-recommendation copy in one click!
Official-account copy 🥥: a professional tone that turns followers into loyal readers!
Unlimited creativity 🥰: ad headlines, creative ads, and product naming let your imagination soar!
Video and livestream helpers 🦋🕍💗: scripts, voice-over drafts, and livestream scripts make you the star in front of the camera!
Academic and literary too ✨🌷📕: poetry, essays, and book reviews, a perfect blend of culture and commerce!

🎯 Try the "Creative Ad 🥰" feature now and give your ad copy instant charm and creativity that draws every eye!

💡 More than copy generation: our image-understanding technology lets you draw inspiration from pictures, and combined with the powerful language capabilities of ERNIE 4.0, it makes every sentence shine!

1. Marketing Master | Copywriting Assistant - Project Overview

💡 1.1 Main Features

  • Image understanding: a CLIP model analyzes the uploaded image and extracts key information such as subject, color, and mood, providing inspiration and a starting point for the copy. ✨

  • Marketing copy generation: powered by the natural-language capabilities of ERNIE Bot and ERNIE Bot Agent, the application automatically generates compelling, marketing-ready copy from user-supplied keywords, target audience, product features, and other inputs. ✨

  • Personalization: users can adjust the style, length, and emphasis of the copy to suit different scenarios and platforms. ✨

  • Multilingual support: besides Chinese, the application supports English, Japanese, French, and other languages, helping users run marketing campaigns worldwide. ✨

  • Smart optimization suggestions: based on copy performance and user feedback, the application offers suggestions for continuously improving copy quality. ✨

💡 1.2 Use Cases

  • Social media marketing: quickly generate copy tailored to social platforms to boost brand exposure and user engagement. 🥰

  • Ad creative production: give ad designers creative inspiration and copy support to improve click-through and conversion rates. 🥰

  • E-commerce product descriptions: automatically generate detailed, appealing product descriptions that help sellers lift sales. 🥰

  • Campaign planning and promotion: help planners write promotional copy that raises participation and impact. 🥰

💡 1.3 Technical Advantages

  • Advanced model architecture: built on the ERNIE (Wenxin Yiyan) capabilities provided through ERNIE Bot, ensuring accurate and fluent copy generation. 🌸

  • Efficient computation: optimizations such as PaddleMIX provide fast image understanding and keep response times low. 🌸

  • Easy to integrate and extend: rich API interfaces and documentation make it straightforward to embed the application in your own systems and extend it as needed. 🌸

2. Marketing Master | Copywriting Assistant - Quick Start

📕 2.1 Environment Setup

Install [ERNIE Bot], [ERNIE Bot Agent], and [ppdiffusers] from PaddleMIX; see each project's documentation for details.

In [ ]

!pip install -r requirements.txt --user
# Install the core agent module
!pip install --upgrade erniebot-agent --user
# Install all optional modules of the ERNIE agent framework
!pip install --upgrade erniebot-agent[all] --user
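
As an optional sanity check, the snippet below confirms the packages are visible to Python after installation; it is a minimal sketch that assumes the PyPI distribution names match the pip commands above.

In [ ]

# Optional sanity check: print the installed versions
# (distribution names assumed to match the pip commands above).
from importlib.metadata import version

print("erniebot:", version("erniebot"))
print("erniebot-agent:", version("erniebot-agent"))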

📕 2.2 ERNIE Bot

The ERNIE SDK repository contains two projects: ERNIE Bot Agent and ERNIE Bot. ERNIE Bot Agent is an agent development framework from Baidu PaddlePaddle, built on the orchestration capabilities of the ERNIE foundation models and integrated with the rich platform features of the AI Studio community. ERNIE Bot gives developers convenient interfaces for calling the ERNIE models' basic capabilities such as text generation, general dialogue, semantic embeddings, and AI image generation. Here we use ERNIE Bot's Chat Completion capability; the code follows.

In [1]

import erniebot as eb

def glm_single_QA(api_type, access_token, model, prompt):
    # Send a single-turn question to ERNIE Bot and return the answer text
    eb.api_type = api_type
    eb.access_token = access_token

    # Create a chat completion with a single user message
    response = eb.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return response.get_result()

def generate_prompt(prompt, style):
    # Wrap the user's topic in a prompt template for the chosen copy style
    if len(prompt) == 0:
        return "请输入您的提示语"

    generated_prompt = '以“' + prompt + '”为主题,撰写一段' + style + ',字数在100字左右'
    print('功能:' + style + ',generated_prompt:' + generated_prompt)

    return generated_prompt

def text_generation(api_type, access_token, model, original_prompt, generated_prompt, style):
    # Prefer the templated prompt; fall back to the raw user prompt
    if len(generated_prompt) != 0:
        prompt = generated_prompt
    elif len(original_prompt) != 0:
        prompt = original_prompt
    else:
        return "请输入您的提示语"

    print('功能:' + style + ',提示语:' + prompt)

    response = glm_single_QA(api_type, access_token, model, prompt)

    result = '按照您的提示语:' + prompt + ',生成的文案如下:\n\n' + response

    return result
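
For reference, the hard-coded generated_prompt used in the next cell can be produced by the generate_prompt helper defined above:

In [ ]

# Builds the same templated prompt that is hard-coded in the next cell.
generated_prompt = generate_prompt('AI Studio深度学习平台', '创意广告🥰')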

In [2]

api_type = 'aistudio'
access_token = 'xxxxxxxxxxxxxxxxxx' # replace with your own AI Studio access token
model = 'ernie-4.0'
original_prompt = 'AI Studio深度学习平台'
generated_prompt = '以“AI Studio深度学习平台”为主题,撰写一段创意广告🥰,字数在100字左右'
style = '创意广告🥰'
text_generation(api_type, access_token, model, original_prompt, generated_prompt, style)
功能:创意广告🥰,提示语:以“AI Studio深度学习平台”为主题,撰写一段创意广告🥰,字数在100字左右
'按照您的提示语:以“AI Studio深度学习平台”为主题,撰写一段创意广告🥰,字数在100字左右,生成的文案如下:\n\n"AI Studio,你的深度学习创意工厂!从这里,开启你的AI之旅,激发无限可能。强大的算法库,海量的数据集,一站式开发环境,让你的灵感翱翔。在AI Studio,每一个创新想法都能轻松实现。立即加入,携手未来,用AI改变世界!"'

🥰 Output: '按照您的提示语:以“AI Studio深度学习平台”为主题,撰写一段创意广告🥰,字数在100字左右,生成的文案如下:\n\n"探索未来,启程于AI Studio!我们的深度学习平台助您轻松构建智能应用,释放无限创意。无论您是新手还是专家,AI Studio都为您提供强大的工具和资源,让您的AI梦想照进现实。快来AI Studio,引领智能时代,共创美好未来!"'

📕 2.3 ERNIE Bot Agent

ERNIE Bot Agent is a large-model agent development framework newly launched by Baidu PaddlePaddle. Built on the strong orchestration capabilities of the ERNIE foundation models and combined with the rich platform features of the AI Studio community, ERNIE Bot Agent aims to be a comprehensive, highly customizable, one-stop framework for developing large-model agents and applications.

🍑 2.3.1 Orchestration

ERNIE Bot Agent implements multi-tool orchestration and automatic scheduling on top of the ERNIE models' Function Calling capability, and allows tools, plugins, knowledge bases, and other components to be orchestrated together. Beyond automatic scheduling, more orchestration modes such as manual and semi-automatic orchestration are planned, giving developers greater flexibility.

🍑 2.3.2 Rich Component Library
  • 🍒 Preset tools: a single line of code loads any of the 30+ preset tools in the AI Studio tool center (see the sketch after this list). These tools currently come mainly from Baidu's AI development platform and PaddlePaddle's signature PP-series models; more preset tools will be added over time, and community contributions are welcome. The tool module also lets users define their own local and remote tools.

  • 🍒 Knowledge base: an out-of-the-box, platform-hosted knowledge base based on Wenxin Baizhong is provided, and in custom setups developers can also use mainstream open-source libraries such as langchain and llama_index as the knowledge base.

  • 🍒 ERNIE Bot plugins: calling plugins from the ERNIE Bot plugin store will be supported in the future (under development).
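
A minimal sketch of the one-line tool loading mentioned above; the toolkit id "text-moderation" is only an illustrative example, and any tool id from the AI Studio tool center works the same way.

In [ ]

# Minimal sketch: load a preset toolkit from the AI Studio tool center.
# "text-moderation" is an illustrative tool id.
from erniebot_agent.tools import RemoteToolkit

toolkit = RemoteToolkit.from_aistudio("text-moderation")
tools = toolkit.get_tools()  # these Tool objects can be passed straight to a FunctionAgent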

🍑 2.3.3 Low Barrier to Entry
  • 🍓 Zero-code UI: the AI Studio community provides a zero-code agent builder, so AI-native applications can be created through simple point-and-click configuration.

  • 🍓 Concise code: an agent application can be built in about ten lines of code (see the sketch after this list).

  • 🍓 Preset resources and platform support: a large set of preset tools, a platform-level knowledge base, and an upcoming platform-level memory mechanism all speed up development.
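
The sketch below shows what such a roughly ten-line agent can look like, adapted from the usual ERNIE Bot Agent quickstart pattern; it assumes EB_AGENT_ACCESS_TOKEN is already set in the environment and uses an illustrative preset tool.

In [ ]

# Roughly ten lines for a working agent (assumes EB_AGENT_ACCESS_TOKEN is set).
from erniebot_agent.agents import FunctionAgent
from erniebot_agent.chat_models import ERNIEBot
from erniebot_agent.tools import RemoteToolkit

llm = ERNIEBot(model="ernie-3.5")
toolkit = RemoteToolkit.from_aistudio("text-moderation")  # illustrative preset tool
agent = FunctionAgent(llm=llm, tools=toolkit.get_tools())
result = await agent.run("帮我判断这句话是否合规:今天天气真好")  # top-level await works in a notebook
print(result.text)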

In [ ]

# Download the data and unzip it
!wget https://paddlenlp.bj.bcebos.com/models/community/Salesforce/blip-image-captioning-large/data.zip
# Unzip data.zip into the clip_interrogator directory
!unzip -d clip_interrogator data.zip

In [ ]

# Use clip_interrogator to extract a textual description of an image
from PIL import Image
from clip_interrogator import Config, Interrogator


def catch_information(image_path):
    # Load the image and run a fast CLIP interrogation to get descriptive tags
    image = Image.open(image_path).convert('RGB')
    ci = Interrogator(Config(clip_pretrained_model_name_or_path="openai/clip-vit-large-patch14"))
    return ci.interrogate_fast(image)

catch_information('example.jpg')

a potted plant sitting on top of a table, unique pot made for houseplants, green vines, house plants, green plant, ivy vines, jungle vines, green walls, twisting leaves, leaves and vines, houseplants, lush plant growth, green wall, vines, limbs made from vines, fine foliage lace, green plants, green jungle, monstera

In [9]

import os

os.environ["EB_AGENT_LOGGING_LEVEL"] = "INFO"  #  这个是日志包
os.environ["EB_AGENT_ACCESS_TOKEN"] = "xxxxxxxxxxxxxxxxxxx"  #  这是星河社区的token(令牌)输入你自己的token

import asyncio

from pydantic import Field
from typing import Dict, Type, Any

from erniebot_agent.chat_models import ERNIEBot
from erniebot_agent.memory import HumanMessage, SlidingWindowMemory, WholeMemory
from erniebot_agent.tools.base import Tool
from erniebot_agent.tools.schema import ToolParameterView
from erniebot_agent.agents import FunctionAgent
from erniebot_agent.tools import RemoteToolkit
from erniebot_agent.file import GlobalFileManagerHandler

In [10]

# The first class describes the parameters this tool accepts.
# Note that it must inherit from the framework's ToolParameterView.
class GetDataInputView(ToolParameterView):
    need: str = Field(description="图像理解分析")

# The second class describes the response variable, i.e. the tool's output parameter.
class GetDataOutputView(ToolParameterView):
    response: str = Field(description="图像分析结果")

# The third class ties the input view, output view, and tool description together;
# it calls the ERNIE model to translate the raw image description and filter out the key information.
class GetInformation(Tool):
    description: str = "GetInformation是一款获取图像信息的工具"
    input_type: Type[ToolParameterView] = GetDataInputView
    ouptut_type: Type[ToolParameterView] = GetDataOutputView

    def __init__(self, llm: ERNIEBot, image_path):
        self.llm = llm
        self.data = catch_information(image_path)

    async def __call__(self, need: str) -> Dict[str, str]:
        form = self.data
        prompt = f"请根据{need}对{form}中的图像信息进行翻译,并且筛选出符合图像的关键信息。"
        response = await self.llm.chat([HumanMessage(prompt)])
        return {"response": response.content}

# Set the model's role (system message) for translation and information filtering.
SYSTEM_MESSAGE = "你是一个翻译兼信息筛选专家,请根据原始获取的图像信息,输出专业的翻译和图像信息筛选。"
llm = ERNIEBot(model="ernie-4.0", system=SYSTEM_MESSAGE)

In [11]

# Defined in the same way as the first tool, so the details are not repeated here
class GetPlanInputView(ToolParameterView):
    recommendation: str = Field(description="图像信息筛选结果")

class GetPlanOutputView(ToolParameterView):
    response: str = Field(description="营销文案生成")

class CatchInformation(Tool):
    description: str = "CatchInformation是一款专业且贴心的营销文案生成的工具"
    input_type: Type[ToolParameterView] = GetPlanInputView
    ouptut_type: Type[ToolParameterView] = GetPlanOutputView

    def __init__(self, llm: ERNIEBot):
        self.llm = llm

    async def __call__(self, recommendation: str) -> Dict[str, str]:
        prompt = f"请根据图像信息筛选结果:\n{recommendation}\n输出符合图像信息的营销文案,请一步一步思考后输出结果。"
        response = await self.llm.chat([HumanMessage(prompt)])
        return {"response": response.content}

SYSTEM_MESSAGE = "你是一个营销文案写作助手,你的任务是根据图像信息筛选结果输出符合图像信息的营销文案。请注意,你只需要输出符合图像信息的营销文案,不要输出多余的解释。"
llm = ERNIEBot(model="ernie-4.0", system=SYSTEM_MESSAGE)
# Instantiate the tool so the agent can call it later
plan_tool = CatchInformation(llm)
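
Before handing both tools to the agent, they can also be chained by hand to preview the pipeline that the FunctionAgent automates in the next cell; a sketch assuming 1.jpg exists in the working directory.

In [ ]

# Manual pipeline: image description -> key-information filtering -> marketing copy.
info = await GetInformation(llm, '1.jpg')(need='图像理解分析')
copy = await plan_tool(recommendation=info['response'])
print(copy['response'])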

In [12]

memory = SlidingWindowMemory(max_round=10)
llm_final = ERNIEBot(model="ernie-4.0", api_type="aistudio", enable_multi_step_tool_call=True)
image_path = '1.jpg'
# Build the agent with both tools and a sliding-window memory, then run it (top-level await works in a notebook)
agent = FunctionAgent(
    llm=llm_final,
    tools=[GetInformation(llm, image_path), plan_tool],
    memory=memory,
    max_steps=10,
)
response = await agent.run("为1.jpg生成朋友圈文案")
print(response.text)
[2024-03-21 20:31:19,637] [    INFO] - Found /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/config.json
[2024-03-21 20:31:19,639] [    INFO] - Loading configuration file /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/config.json
[2024-03-21 20:31:19,747] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/model_state.pdparams
[2024-03-21 20:31:19,749] [    INFO] - Loading weights file model_state.pdparams from cache at /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/model_state.pdparams
[2024-03-21 20:31:21,163] [    INFO] - Loaded weights file from disk, setting weights to model.
[2024-03-21 20:31:23,942] [ WARNING] - Some weights of the model checkpoint at Salesforce/blip-image-captioning-large were not used when initializing BlipForConditionalGeneration: ['text_decoder.cls.predictions.decoder.bias', 'text_decoder.cls.predictions.decoder.weight']
- This IS expected if you are initializing BlipForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BlipForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2024-03-21 20:31:23,943] [ WARNING] - Some weights of BlipForConditionalGeneration were not initialized from the model checkpoint at Salesforce/blip-image-captioning-large and are newly initialized: ['text_decoder.cls.predictions.decoder_weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2024-03-21 20:31:23,952] [    INFO] - Found /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/preprocessor_config.json
[2024-03-21 20:31:23,953] [    INFO] - loading configuration file from cache at /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/preprocessor_config.json
[2024-03-21 20:31:23,954] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/vocab.txt
[2024-03-21 20:31:23,955] [    INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/community/Salesforce/blip-image-captioning-large/added_tokens.json and saved to /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large
[2024-03-21 20:31:24,009] [ WARNING] - file<https://bj.bcebos.com/paddlenlp/models/community/Salesforce/blip-image-captioning-large/added_tokens.json> not exist
[2024-03-21 20:31:24,011] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/special_tokens_map.json
[2024-03-21 20:31:24,012] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/Salesforce/blip-image-captioning-large/tokenizer_config.json
[2024-03-21 20:31:24,029] [    INFO] - Assigning [DEC] to the bos_token key of the tokenizer
[2024-03-21 20:31:24,030] [    INFO] - Adding [DEC] to the vocabulary
[2024-03-21 20:31:24,030] [    INFO] - Assigning ['[ENC]'] to the additional_special_tokens key of the tokenizer
[2024-03-21 20:31:24,031] [    INFO] - Adding [ENC] to the vocabulary
[2024-03-21 20:31:24,084] [    INFO] - Found /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/config.json
[2024-03-21 20:31:24,087] [    INFO] - Loading configuration file /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/config.json
[2024-03-21 20:31:24,090] [    INFO] - Model config CLIPConfig {
  "_name_or_path": "clip-vit-large-patch14/",
  "architectures": [
    "CLIPModel"
  ],
  "initializer_factor": 1.0,
  "logit_scale_init_value": 2.6592,
  "model_type": "clip",
  "paddlenlp_version": null,
  "projection_dim": 768,
  "return_dict": true,
  "text_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "bos_token_id": 49406,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "dtype": "float32",
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 49407,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 77,
    "min_length": 0,
    "model_type": "clip_text_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_choices": null,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 1,
    "paddlenlp_version": null,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 768,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tensor_parallel_degree": -1,
    "tensor_parallel_output": false,
    "tensor_parallel_rank": 0,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "typical_p": 1.0,
    "use_cache": false,
    "vocab_size": 49408
  },
  "text_config_dict": {
    "hidden_size": 768,
    "intermediate_size": 3072,
    "num_attention_heads": 12,
    "num_hidden_layers": 12,
    "projection_dim": 768
  },
  "transformers_version": null,
  "vision_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.0,
    "dtype": "float32",
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "quick_gelu",
    "hidden_size": 1024,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "image_size": 224,
    "initializer_factor": 1.0,
    "initializer_range": 0.02,
    "intermediate_size": 4096,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-05,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "clip_vision_model",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_choices": null,
    "num_hidden_layers": 24,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "paddlenlp_version": null,
    "patch_size": 14,
    "prefix": null,
    "problem_type": null,
    "projection_dim": 768,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tensor_parallel_degree": -1,
    "tensor_parallel_output": false,
    "tensor_parallel_rank": 0,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "typical_p": 1.0,
    "use_cache": false
  },
  "vision_config_dict": {
    "hidden_size": 1024,
    "intermediate_size": 4096,
    "num_attention_heads": 16,
    "num_hidden_layers": 24,
    "patch_size": 14,
    "projection_dim": 768
  }
}

[2024-03-21 20:31:24,211] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/model_state.pdparams
[2024-03-21 20:31:24,213] [    INFO] - Loading weights file model_state.pdparams from cache at /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/model_state.pdparams
[2024-03-21 20:31:25,715] [    INFO] - Loaded weights file from disk, setting weights to model.
[2024-03-21 20:31:27,720] [    INFO] - All model checkpoint weights were used when initializing CLIPModel.

[2024-03-21 20:31:27,721] [    INFO] - All the weights of CLIPModel were initialized from the model checkpoint at openai/clip-vit-large-patch14.
If your task is similar to the task the model of the checkpoint was trained on, you can already use CLIPModel for predictions without further training.
[2024-03-21 20:31:27,742] [    INFO] - Found /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/preprocessor_config.json
[2024-03-21 20:31:27,744] [    INFO] - loading configuration file from cache at /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/preprocessor_config.json
[2024-03-21 20:31:27,744] [    INFO] - size should be a dictionary on of the following set of keys: ({'height', 'width'}, {'shortest_edge'}, {'shortest_edge', 'longest_edge'}), got 224. Converted to {'shortest_edge': 224}.
[2024-03-21 20:31:27,745] [    INFO] - crop_size should be a dictionary on of the following set of keys: ({'height', 'width'}, {'shortest_edge'}, {'shortest_edge', 'longest_edge'}), got 224. Converted to {'height': 224, 'width': 224}.
[2024-03-21 20:31:27,746] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/vocab.json
[2024-03-21 20:31:27,747] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/merges.txt
[2024-03-21 20:31:27,747] [    INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/community/openai/clip-vit-large-patch14/added_tokens.json and saved to /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14
[2024-03-21 20:31:27,801] [ WARNING] - file<https://bj.bcebos.com/paddlenlp/models/community/openai/clip-vit-large-patch14/added_tokens.json> not exist
[2024-03-21 20:31:27,802] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/special_tokens_map.json
[2024-03-21 20:31:27,803] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/openai/clip-vit-large-patch14/tokenizer_config.json
100%|██████████| 55/55 [00:00<00:00, 272.77it/s]
INFO - [Run][Start] FunctionAgent is about to start running with input:
为1.jpg生成朋友圈文案
INFO - [LLM][Start] ERNIEBot is about to start running with input:
 role: user 
 content: 为1.jpg生成朋友圈文案 
INFO - [LLM][End] ERNIEBot finished running with output:
 role: assistant 
 function_call: 
{
  "name": "GetInformation",
  "thoughts": "用户需要为1.jpg生成朋友圈文案,我需要先获取图像信息,然后筛选出推荐信息,最后生成朋友圈文案。任务拆解:[sub-task1: 使用[GetInformation]获取图像信息,sub-task2: 使用[CatchInformation]筛选推荐信息,sub-task3: 使用[GetFinalAnswer]生成朋友圈文案]。接下来我需要调用[GetInformation]获取图像信息。",
  "arguments": "{\"need\":\"1.jpg\"}"
} 
INFO - [Tool][Start] GetInformation is about to start running with input:
{
  "need": "1.jpg"
}
INFO - [Tool][End] GetInformation finished running with output:
{
  "response": "根据图像信息“1.jpg”的描述,这张图片似乎包含了音乐符号、销售网站上的图片、商店网站上的图片、千葉雄大的元素、商店网站上的精美图片、促销艺术品、产品广告、数字广告、商业横幅、前能游戏卡、促销艺术以及Muqi创作的广告、网站横幅和促销元素。\n\n符合这些图像信息的营销文案可能包括:\n\n- “感受音乐的魅力,探索销售网站的独家优惠!”\n- “千葉雄大推荐,商店网站上的精选图片等你发现!”\n- “精美图片,促销艺术品,尽在我们的销售网站!”\n- “数字广告引领潮流,商业横幅展示实力!”\n- “Muqi设计的网站横幅,为您的产品增添无限魅力!”\n- “前能游戏卡助力促销,让您的广告更具吸引力!”\n- “探索我们的商店网站,发现更多促销艺术和精美图片!”\n\n请注意,由于我无法实际查看“1.jpg”图像,以上文案是基于您提供的图像信息描述进行推测和创作的。实际文案可能需要根据图像的具体内容进行调整。"
}
INFO - [LLM][Start] ERNIEBot is about to start running with input:
 role: function 
 name: GetInformation 
 content: {"response": "根据图像信息“1.jpg”的描述,这张图片似乎包含了音乐符号、销售网站上的图片、商店网站上的图片、千葉雄大的元素、商店网站上的精美图片、促销艺术品、产品广告、数字广告、商业... 
INFO - [LLM][End] ERNIEBot finished running with output:
 role: assistant 
 function_call: 
{
  "name": "CatchInformation",
  "thoughts": "我已经获取了图像信息,现在我需要使用[CatchInformation]筛选出推荐信息。接下来我需要调用[CatchInformation]筛选推荐信息。",
  "arguments": "{\"recommendation\":\"感受音乐的魅力,探索销售网站的独家优惠!千葉雄大推荐,商店网站上的精选图片等你发现!精美图片,促销艺术品,尽在我们的销售网站!数字广告引领潮流,商业横幅展示实力!Muqi设计的网站横幅,为您的产品增添无限魅力!前能游戏卡助力促销,让您的广告更具吸引力!探索我们的商店网站,发现更多促销艺术和精美图片!\"}"
} 
INFO - [Tool][Start] CatchInformation is about to start running with input:
{
  "recommendation": "感受音乐的魅力,探索销售网站的独家优惠!千葉雄大推荐,商店网站上的精选图片等你发现!精美图片,促销艺术品,尽在我们的销售网站!数字广告引领潮流,商业横幅展示实力!Muqi设计的网站横幅,为您的产品增添无限魅力!前能游戏卡助力促销,让您的广告更具吸引力!探索我们的商店网站,发现更多促销艺术和精美图片!"
}
INFO - [Tool][End] CatchInformation finished running with output:
{
  "response": "根据您的图像信息筛选结果,我为您构思了以下符合要求的营销文案:\n\n1. “感受音乐的魅力,尽享独家优惠!千葉雄大力荐,精选好货等你发现!”\n2. “探索销售网站,发现艺术之美!精美图片,促销臻品,一网打尽!”\n3. “数字广告新潮流,商业横幅展风采!Muqi设计,为您的产品锦上添花!”\n4. “游戏卡助力促销,广告更具吸引力!一卡在手,优惠我有!”\n5. “发现商店网站的无限魅力,促销艺术与精美图片应有尽有!”\n\n这些文案既符合您提供的图像信息,又能够吸引潜在客户的注意力,希望对您有所帮助!"
}
INFO - [LLM][Start] ERNIEBot is about to start running with input:
 role: function 
 name: CatchInformation 
 content: {"response": "根据您的图像信息筛选结果,我为您构思了以下符合要求的营销文案:\n\n1. “感受音乐的魅力,尽享独家优惠!千葉雄大力荐,精选好货等你发现!”\n2. “探索销售网站,发现... 
INFO - [LLM][End] ERNIEBot finished running with output:
 role: assistant 
 content: 根据您的需求,我已经为您生成了符合1.jpg图像信息的朋友圈文案。以下是推荐的文案列表:

1. 感受音乐的魅力,尽享独家优惠!千葉雄大力荐,精选好货等你发现!
2. 探索销售网站,发现艺术之美!精美... 
INFO - [Run][End] FunctionAgent finished running.
根据您的需求,我已经为您生成了符合1.jpg图像信息的朋友圈文案。以下是推荐的文案列表:

1. 感受音乐的魅力,尽享独家优惠!千葉雄大力荐,精选好货等你发现!
2. 探索销售网站,发现艺术之美!精美图片,促销臻品,一网打尽!
3. 数字广告新潮流,商业横幅展风采!Muqi设计,为您的产品锦上添花!
4. 游戏卡助力促销,广告更具吸引力!一卡在手,优惠我有!
5. 发现商店网站的无限魅力,促销艺术与精美图片应有尽有!

希望这些文案能够满足您的需求,让您的朋友圈更加吸引人。如果您需要进一步操作或有其他问题,请随时告诉我。

