This article walks through Octave Convolution (OctConv): how it separates feature maps into high- and low-frequency parts to reduce redundancy, and how to reproduce OctConv, Oct-MobileNetV1, and Oct-ResNet with PaddlePaddle. In experiments on CIFAR-10, compared with the original models, the Octave variants add slightly more parameters, reach similar training accuracy, and lose a little evaluation accuracy, confirming that the approach is workable.
Octave convolution was proposed in 2019 in the paper "Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution" and drew considerable attention at the time. It modifies conventional convolution to reduce spatial redundancy. "Drop an octave" is the musical notion of going down one octave, i.e. halving the frequency.
Unlike conventional convolution, octave convolution targets the high-frequency and low-frequency signals in an image. First, recall the notions of high and low frequency from digital image processing; they are also called the high- and low-frequency components. The high-frequency components are pixels where image intensity (brightness/gray level) changes sharply, such as edges (contours), fine details, and noise (a noisy pixel is called noise precisely because its gray level differs markedly from its neighbours, i.e. the intensity changes abruptly, so noise belongs to the high-frequency part). The low-frequency components are pixels where image intensity varies smoothly, such as large uniform patches of colour. For example, when reading a book we focus on the printed text rather than the paper itself: the text is the high-frequency component, and the white paper is the low-frequency component.
The figure below is the example given in the paper: the left image is the original, the middle shows the low-frequency signal, and the right shows the high-frequency signal.
In the paper, the authors observe that higher frequencies are usually encoded with fine details, while lower frequencies are usually encoded with global structure. Since images decompose into high and low frequencies, the feature maps produced by convolution naturally do as well. In image processing, a model learns much of the information it needs from the high-frequency feature maps, which carry contour and edge information and help with tasks such as saliency detection; the low-frequency feature maps carry comparatively little information. If we process high- and low-frequency feature maps in exactly the same way, the payoff from the former is clearly far greater than from the latter. This is the redundancy in feature maps: the low-information, low-frequency part. The paper therefore proposes a divide-and-conquer scheme, called the Octave Feature Representation, which processes the high- and low-frequency feature maps separately. As shown in the figure below, the authors reduce the resolution of the low-frequency feature maps to 1/2, which not only reduces redundant data but also helps capture global information.
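Concretely, with a low-frequency ratio α (the paper's hyperparameter), a feature map of c channels at h×w is stored as (1−α)·c high-frequency channels at full resolution plus α·c low-frequency channels at half resolution. A small sketch of the resulting shapes and storage saving (the 64-channel 32×32 numbers are just an example):

```python
import numpy as np

def octave_split_shapes(c, h, w, alpha=0.5):
    """Shapes of the high-/low-frequency parts of an octave feature map."""
    c_low = int(alpha * c)          # low-frequency channels, half resolution
    c_high = c - c_low              # high-frequency channels, full resolution
    return (c_high, h, w), (c_low, h // 2, w // 2)

high, low = octave_split_shapes(c=64, h=32, w=32, alpha=0.25)
print(high, low)  # (48, 32, 32) (16, 16, 16)

# Activations stored, relative to a plain 64-channel 32x32 feature map:
saved = 1 - (np.prod(high) + np.prod(low)) / (64 * 32 * 32)
print(saved)  # 0.1875 -> 18.75% fewer activation values to store
```

Because each low-frequency channel occupies only a quarter of the spatial positions, even a modest α yields a noticeable reduction.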
According to the idea of scale space, features are expected to possess scale invariance and rotation invariance.
To achieve both the update within each frequency and the exchange between frequencies, the convolution kernel is split into four parts: high→high and low→low (intra-frequency update), plus high→low and low→high (inter-frequency exchange):
The figure below visualises the octave convolution kernel: the four parts together form a k×k kernel. Here, "in" and "out" refer to properties of the input and output feature maps; in this article the input's low-frequency ratio and channel count match those of the output.
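The channel bookkeeping of those four kernel parts can be traced with a tiny helper. The names mirror the conv_h2h / conv_h2l / conv_l2h / conv_l2l layers in the implementation below; the 64→128 channel counts are only an example:

```python
def octconv_branches(c_in, c_out, alpha_in=0.5, alpha_out=0.5):
    """Input/output channel counts of the four OctConv kernel parts."""
    l_in, l_out = int(alpha_in * c_in), int(alpha_out * c_out)
    h_in, h_out = c_in - l_in, c_out - l_out
    return {
        "h2h": (h_in, h_out),  # intra-frequency update, high
        "l2l": (l_in, l_out),  # intra-frequency update, low
        "h2l": (h_in, l_out),  # inter-frequency exchange (needs avg-pool)
        "l2h": (l_in, h_out),  # inter-frequency exchange (needs upsample)
    }

print(octconv_branches(64, 128, alpha_in=0.25, alpha_out=0.25))
# {'h2h': (48, 96), 'l2l': (16, 32), 'h2l': (48, 32), 'l2h': (16, 96)}
```

Note that the four parts partition the full c_in×c_out channel grid, so an OctConv layer has essentially the same number of weights as a plain convolution of the same size.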
With the kernel covered, we now look at how an input passes through octave convolution to produce an output. As shown below, the low- and high-frequency inputs go through the octave convolution to yield low- and high-frequency outputs. Red denotes high frequency and blue low frequency; green arrows denote the update within a frequency, and red arrows the exchange between frequencies.
H and W denote the height and width of a feature map; the low-frequency maps have half the height and width of the high-frequency ones. Because the resolutions differ, the inter-frequency exchange first requires a resolution adjustment: high→low needs a pooling (downsampling) step, and low→high needs an upsampling step.
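The two resolution adjustments can be sketched with NumPy: average pooling for the high→low path and nearest-neighbour upsampling for the low→high path, matching the AvgPool2D and Upsample layers used in the code below:

```python
import numpy as np

def avg_pool_2x2(x):
    """High -> low exchange: 2x2 average pooling halves H and W."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_nearest_2x(x):
    """Low -> high exchange: nearest-neighbour upsampling doubles H and W."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x_high = np.arange(16.0).reshape(4, 4)
x_low = avg_pool_2x2(x_high)
print(x_low.shape)                       # (2, 2)
print(upsample_nearest_2x(x_low).shape)  # (4, 4)
```

Each output pixel of the pooled map is the mean of a 2×2 block; upsampling simply repeats each low-frequency pixel into a 2×2 block, so the two operations are cheap and shape-compatible.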
In [ ]
```python
import math

import paddle
import paddle.nn as nn


class OctaveConv(nn.Layer):
    def __init__(self, in_channels, out_channels, kernel_size, alpha_in=0.5, alpha_out=0.5,
                 stride=1, padding=0, dilation=1, groups=1, bias=False):
        super(OctaveConv, self).__init__()
        self.downsample = nn.AvgPool2D(kernel_size=(2, 2), stride=2)
        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
        assert stride == 1 or stride == 2, "Stride should be 1 or 2."
        self.stride = stride
        self.is_dw = groups == in_channels
        assert 0 <= alpha_in <= 1 and 0 <= alpha_out <= 1, "Alphas should be in the interval from 0 to 1."
        self.alpha_in, self.alpha_out = alpha_in, alpha_out
        # Four kernel parts: l2l/h2h update within a frequency,
        # l2h/h2l exchange information between frequencies.
        self.conv_l2l = None if alpha_in == 0 or alpha_out == 0 else \
            nn.Conv2D(int(alpha_in * in_channels), int(alpha_out * out_channels),
                      kernel_size, 1, padding, dilation, math.ceil(alpha_in * groups))
        self.conv_l2h = None if alpha_in == 0 or alpha_out == 1 or self.is_dw else \
            nn.Conv2D(int(alpha_in * in_channels), out_channels - int(alpha_out * out_channels),
                      kernel_size, 1, padding, dilation, groups)
        self.conv_h2l = None if alpha_in == 1 or alpha_out == 0 or self.is_dw else \
            nn.Conv2D(in_channels - int(alpha_in * in_channels), int(alpha_out * out_channels),
                      kernel_size, 1, padding, dilation, groups)
        self.conv_h2h = None if alpha_in == 1 or alpha_out == 1 else \
            nn.Conv2D(in_channels - int(alpha_in * in_channels), out_channels - int(alpha_out * out_channels),
                      kernel_size, 1, padding, dilation, math.ceil(groups - alpha_in * groups))

    def forward(self, x):
        x_h, x_l = x if type(x) is tuple else (x, None)

        x_h = self.downsample(x_h) if self.stride == 2 else x_h
        x_h2h = self.conv_h2h(x_h)
        x_h2l = self.conv_h2l(self.downsample(x_h)) if self.alpha_out > 0 and not self.is_dw else None

        if x_l is not None:
            x_l2l = self.downsample(x_l) if self.stride == 2 else x_l
            x_l2l = self.conv_l2l(x_l2l) if self.alpha_out > 0 else None
            if self.is_dw:
                return x_h2h, x_l2l
            else:
                x_l2h = self.conv_l2h(x_l)
                x_l2h = self.upsample(x_l2h) if self.stride == 1 else x_l2h
                x_h = x_l2h + x_h2h
                x_l = x_h2l + x_l2l if x_h2l is not None and x_l2l is not None else None
                return x_h, x_l
        else:
            return x_h2h, x_h2l


class Conv_BN(nn.Layer):
    def __init__(self, in_channels, out_channels, kernel_size, alpha_in=0.5, alpha_out=0.5,
                 stride=1, padding=0, dilation=1, groups=1, bias=False, norm_layer=nn.BatchNorm2D):
        super(Conv_BN, self).__init__()
        self.conv = OctaveConv(in_channels, out_channels, kernel_size, alpha_in, alpha_out,
                               stride, padding, dilation, groups, bias)
        self.bn_h = None if alpha_out == 1 else norm_layer(int(out_channels * (1 - alpha_out)))
        self.bn_l = None if alpha_out == 0 else norm_layer(int(out_channels * alpha_out))

    def forward(self, x):
        x_h, x_l = self.conv(x)
        x_h = self.bn_h(x_h)
        x_l = self.bn_l(x_l) if x_l is not None else None
        return x_h, x_l


class Conv_BN_ACT(nn.Layer):
    def __init__(self, in_channels=3, out_channels=32, kernel_size=3, alpha_in=0.5, alpha_out=0.5,
                 stride=1, padding=0, dilation=1, groups=1, bias=False,
                 norm_layer=nn.BatchNorm2D, activation_layer=nn.ReLU):
        super(Conv_BN_ACT, self).__init__()
        self.conv = OctaveConv(in_channels, out_channels, kernel_size, alpha_in, alpha_out,
                               stride, padding, dilation, groups, bias)
        self.bn_h = None if alpha_out == 1 else norm_layer(int(out_channels * (1 - alpha_out)))
        self.bn_l = None if alpha_out == 0 else norm_layer(int(out_channels * alpha_out))
        self.act = activation_layer()

    def forward(self, x):
        x_h, x_l = self.conv(x)
        x_h = self.act(self.bn_h(x_h))
        x_l = self.act(self.bn_l(x_l)) if x_l is not None else None
        return x_h, x_l
```
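As a sanity check on the forward pass above, the output shapes can be traced without running Paddle. This pure-Python sketch follows the stride and pooling logic of OctaveConv.forward for a "same"-padded layer (e.g. 3×3 with padding 1); the 128-channel, 32×32 numbers are just an example:

```python
def octconv_out_shapes(c_out, hw, alpha_out=0.5, stride=1):
    """(channels, H, W) of the high/low outputs of an OctaveConv layer,
    mirroring forward(): with stride 2 the high-frequency input is
    average-pooled first, so every output drops by another factor of 2."""
    if stride == 2:
        hw //= 2
    l_out = int(alpha_out * c_out)
    h_out = c_out - l_out
    return (h_out, hw, hw), (l_out, hw // 2, hw // 2)

print(octconv_out_shapes(128, 32))            # ((64, 32, 32), (64, 16, 16))
print(octconv_out_shapes(128, 32, stride=2))  # ((64, 16, 16), (64, 8, 8))
```

The low-frequency output is always one octave below the high-frequency one, which is the invariant the four conv branches rely on.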
Reproducing Oct-MobileNetV1 amounts to replacing the original Conv2D layers in MobileNetV1 with octave convolutions, leaving everything else unchanged. The network structure and parameter count of Oct-MobileNetV1 are printed further below for reference.
In [ ]
```python
# Oct-MobileNetV1
import paddle.nn as nn

__all__ = ['oct_mobilenet']


def conv_bn(inp, oup, stride):
    return nn.Sequential(
        nn.Conv2D(inp, oup, 3, stride, 1),
        nn.BatchNorm2D(oup),
        nn.ReLU()
    )


def conv_dw(inp, oup, stride, alpha_in=0.5, alpha_out=0.5):
    return nn.Sequential(
        Conv_BN_ACT(inp, inp, kernel_size=3, stride=stride, padding=1, groups=inp,
                    alpha_in=alpha_in, alpha_out=alpha_in if alpha_out != alpha_in else alpha_out),
        Conv_BN_ACT(inp, oup, kernel_size=1, alpha_in=alpha_in, alpha_out=alpha_out)
    )


class OctMobileNet(nn.Layer):
    def __init__(self, num_classes=1000):
        super(OctMobileNet, self).__init__()
        self.features = nn.Sequential(
            conv_bn(   3,   32, 2),
            conv_dw(  32,   64, 1, 0, 0.5),
            conv_dw(  64,  128, 2),
            conv_dw( 128,  128, 1),
            conv_dw( 128,  256, 2),
            conv_dw( 256,  256, 1),
            conv_dw( 256,  512, 2),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1),
            conv_dw( 512,  512, 1, 0.5, 0),
            conv_dw( 512, 1024, 2, 0, 0),
            conv_dw(1024, 1024, 1, 0, 0),
        )
        self.avgpool = nn.AdaptiveAvgPool2D((1, 1))
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, x):
        x_h, x_l = self.features(x)
        x = self.avgpool(x_h)
        x = x.reshape([-1, 1024])
        x = self.fc(x)
        return x


def oct_mobilenet(**kwargs):
    """Constructs an Octave MobileNet V1 model."""
    return OctMobileNet(**kwargs)
```
Reproducing Oct-ResNet likewise amounts to replacing the original Conv2D layers in ResNet with octave convolutions, leaving everything else unchanged. The network structure and parameter count of Oct-ResNet are printed further below for reference.
In [ ]
```python
import paddle.nn as nn

__all__ = ['OctResNet', 'oct_resnet50', 'oct_resnet101', 'oct_resnet152', 'oct_resnet200']


class Bottleneck(nn.Layer):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, alpha_in=0.5, alpha_out=0.5, norm_layer=None, output=False):
        super(Bottleneck, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2D
        width = int(planes * (base_width / 64.)) * groups
        # Both self.conv2 and self.downsample layers downsample the input when stride != 1
        self.conv1 = Conv_BN_ACT(inplanes, width, kernel_size=1, alpha_in=alpha_in,
                                 alpha_out=alpha_out, norm_layer=norm_layer)
        self.conv2 = Conv_BN_ACT(width, width, kernel_size=3, stride=stride, padding=1,
                                 groups=groups, norm_layer=norm_layer,
                                 alpha_in=0 if output else 0.5, alpha_out=0 if output else 0.5)
        self.conv3 = Conv_BN(width, planes * self.expansion, kernel_size=1, norm_layer=norm_layer,
                             alpha_in=0 if output else 0.5, alpha_out=0 if output else 0.5)
        self.relu = nn.ReLU()
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity_h = x[0] if type(x) is tuple else x
        identity_l = x[1] if type(x) is tuple else None

        x_h, x_l = self.conv1(x)
        x_h, x_l = self.conv2((x_h, x_l))
        x_h, x_l = self.conv3((x_h, x_l))

        if self.downsample is not None:
            identity_h, identity_l = self.downsample(x)

        x_h += identity_h
        x_l = x_l + identity_l if identity_l is not None else None

        x_h = self.relu(x_h)
        x_l = self.relu(x_l) if x_l is not None else None

        return x_h, x_l


class OctResNet(nn.Layer):
    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,
                 groups=1, width_per_group=64, norm_layer=None):
        super(OctResNet, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2D
        self.inplanes = 64
        self.groups = groups
        self.base_width = width_per_group
        self.conv1 = nn.Conv2D(3, self.inplanes, kernel_size=7, stride=2, padding=3)
        self.bn1 = norm_layer(self.inplanes)
        self.relu = nn.ReLU()
        self.maxpool = nn.MaxPool2D(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0], norm_layer=norm_layer, alpha_in=0)
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2, norm_layer=norm_layer)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2, norm_layer=norm_layer)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2, norm_layer=norm_layer,
                                       alpha_out=0, output=True)
        self.avgpool = nn.AdaptiveAvgPool2D((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1, alpha_in=0.5, alpha_out=0.5,
                    norm_layer=None, output=False):
        if norm_layer is None:
            norm_layer = nn.BatchNorm2D
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                Conv_BN(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride,
                        alpha_in=alpha_in, alpha_out=alpha_out)
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
                            self.base_width, alpha_in, alpha_out, norm_layer, output))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, groups=self.groups,
                                base_width=self.base_width, norm_layer=norm_layer,
                                alpha_in=0 if output else 0.5, alpha_out=0 if output else 0.5,
                                output=output))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x_h, x_l = self.layer1(x)
        x_h, x_l = self.layer2((x_h, x_l))
        x_h, x_l = self.layer3((x_h, x_l))
        x_h, x_l = self.layer4((x_h, x_l))

        x = self.avgpool(x_h)
        x = x.reshape([x.shape[0], -1])
        x = self.fc(x)
        return x


def oct_resnet50(pretrained=False, **kwargs):
    """Constructs an Octave ResNet-50 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    return OctResNet(Bottleneck, [3, 4, 6, 3], **kwargs)


def oct_resnet101(pretrained=False, **kwargs):
    """Constructs an Octave ResNet-101 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    return OctResNet(Bottleneck, [3, 4, 23, 3], **kwargs)


def oct_resnet152(pretrained=False, **kwargs):
    """Constructs an Octave ResNet-152 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    return OctResNet(Bottleneck, [3, 8, 36, 3], **kwargs)


def oct_resnet200(pretrained=False, **kwargs):
    """Constructs an Octave ResNet-200 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    return OctResNet(Bottleneck, [3, 24, 36, 3], **kwargs)
```
Experimental data: CIFAR-10
CIFAR-10 is a small dataset for recognising everyday objects, compiled by Hinton's students Alex Krizhevsky and Ilya Sutskever. It contains RGB colour images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The images are 32×32 pixels, and the dataset holds 50,000 training images and 10,000 test images. Sample CIFAR-10 images are shown in the figure.
```python
# Print the structure and parameter count of Oct-MobileNetV1.
Octmobilnet_model = oct_mobilenet(num_classes=10)
# inputs = paddle.randn((1, 2, 224, 224))
# print(model(inputs))
paddle.summary(Octmobilnet_model, (16, 3, 224, 224))
```
```python
import paddle
from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, ToTensor

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_octmobilenet')

# Resize the HWC image first, then convert to a CHW tensor and normalize.
transform = Compose([Resize(size=(224, 224)),
                     ToTensor(),
                     Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train', transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test', transform=transform)

# Training set loader
train_loader = paddle.io.DataLoader(cifar10_train, batch_size=768, shuffle=True, drop_last=True)
# Test set loader (no shuffling needed for evaluation)
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=768, shuffle=False, drop_last=True)

Octmobilnet_model = paddle.Model(oct_mobilenet(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=Octmobilnet_model.parameters())
Octmobilnet_model.prepare(optim, paddle.nn.CrossEntropyLoss(), Accuracy())
Octmobilnet_model.fit(train_data=train_loader,
                      eval_data=test_loader,
                      epochs=12,
                      callbacks=callback,
                      verbose=1)
```
```python
from paddle.vision.models import MobileNetV1

# Print the structure and parameter count of the baseline MobileNetV1.
mobile_model = MobileNetV1(num_classes=10)
# inputs = paddle.randn((1, 2, 224, 224))
# print(model(inputs))
paddle.summary(mobile_model, (16, 3, 224, 224))
```
```python
import paddle
from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, ToTensor

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_mobilenet')

# Resize the HWC image first, then convert to a CHW tensor and normalize.
transform = Compose([Resize(size=(224, 224)),
                     ToTensor(),
                     Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train', transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test', transform=transform)

train_loader = paddle.io.DataLoader(cifar10_train, batch_size=768, shuffle=True, drop_last=True)
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=768, shuffle=False, drop_last=True)

mobile_model = paddle.Model(MobileNetV1(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=mobile_model.parameters())
mobile_model.prepare(optim, paddle.nn.CrossEntropyLoss(), Accuracy())
mobile_model.fit(train_data=train_loader,
                 eval_data=test_loader,
                 epochs=12,
                 callbacks=callback,
                 verbose=1)
```
```python
# Print the structure and parameter count of Oct-ResNet50.
octresnet50_model = oct_resnet50(num_classes=10)
paddle.summary(octresnet50_model, (16, 3, 224, 224))
```
```python
import paddle
from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, ToTensor

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_octresnet')

# Resize the HWC image first, then convert to a CHW tensor and normalize.
transform = Compose([Resize(size=(224, 224)),
                     ToTensor(),
                     Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train', transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test', transform=transform)

train_loader = paddle.io.DataLoader(cifar10_train, batch_size=256, shuffle=True, drop_last=True)
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=256, shuffle=False, drop_last=True)

# Bind the wrapped model under a new name so the oct_resnet50 factory is not shadowed.
octresnet_model = paddle.Model(oct_resnet50(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=octresnet_model.parameters())
octresnet_model.prepare(optim, paddle.nn.CrossEntropyLoss(), Accuracy())
octresnet_model.fit(train_data=train_loader,
                    eval_data=test_loader,
                    epochs=12,
                    callbacks=callback,
                    verbose=1)
```
```python
import paddle
from paddle.vision.models import resnet50

# build model and print its structure and parameter count
resmodel = resnet50(num_classes=10)
paddle.summary(resmodel, (16, 3, 224, 224))
```
```python
import paddle
from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, ToTensor
from paddle.vision.models import resnet50

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_resnet')

# Resize the HWC image first, then convert to a CHW tensor and normalize.
transform = Compose([Resize(size=(224, 224)),
                     ToTensor(),
                     Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train', transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test', transform=transform)

train_loader = paddle.io.DataLoader(cifar10_train, batch_size=256, shuffle=True, drop_last=True)
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=256, shuffle=False, drop_last=True)

res_model = paddle.Model(resnet50(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=res_model.parameters())
res_model.prepare(optim, paddle.nn.CrossEntropyLoss(), Accuracy())
res_model.fit(train_data=train_loader,
              eval_data=test_loader,
              epochs=12,
              callbacks=callback,
              verbose=1)
```
This section presents the ablation results and the visualised training curves for four experiments, comparing Oct-MobileNetV1 against MobileNetV1 and Oct-ResNet50 against ResNet50 on CIFAR-10.
| model | epoch | train acc | eval acc | Total params |
|---|---|---|---|---|
| Oct-MobileNetV1 | 12 | 0.9609 | 0.6952 | 3253898 |
| MobileNetV1 | 12 | 0.9488 | 0.7289 | 3239114 |
| model | epoch | train acc | eval acc | Total params |
|---|---|---|---|---|
| Oct-ResNet50 | 12 | 0.9613 | 0.7915 | 23625674 |
| ResNet50 | 12 | 0.9524 | 0.7939 | 23581642 |
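A quick check of the parameter overhead, using the totals reported in the two tables above:

```python
# Relative parameter increase of the Octave variants over their baselines
# (totals taken from the tables above).
pairs = {
    "MobileNetV1": (3253898, 3239114),   # Oct variant vs. baseline
    "ResNet50":    (23625674, 23581642),
}
for name, (oct_p, base_p) in pairs.items():
    print(f"{name}: +{(oct_p - base_p) / base_p:.2%}")
# MobileNetV1: +0.46%
# ResNet50: +0.19%
```

So the Octave variants cost well under half a percent in extra parameters, consistent with the observation that the four OctConv branches partition, rather than duplicate, the channel grid of a plain convolution.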
2025-08-01