% LaTeX IEEE conference manuscript

\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.

\usepackage{cite}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{multicol}
\usepackage{amsmath}
\usepackage{amssymb}
% \usepackage{hyperref}
\usepackage[implicit=false]{hyperref}
\usepackage{mathrsfs}
% \hypersetup{hidelinks,
% 	colorlinks=true,
% 	allcolors=black,
% 	pdfstartview=Fit,
% 	breaklinks=true}


\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
    T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
    
\begin{document}

\title{Attention-driven Graph Clustering Network\\
{\footnotesize \textsuperscript{*}Note: Sub-titles are not captured in Xplore and
should not be used}
\thanks{Identify applicable funding agency here. If none, delete this.}
}

\author{\IEEEauthorblockN{1\textsuperscript{st} Zhihao Peng}
\IEEEauthorblockA{\textit{City University of Hong Kong} \\
\textit{Hong Kong SAR}\\
zhihapeng3-c@my.cityu.edu.hk}
\and
\IEEEauthorblockN{2\textsuperscript{nd} Hui Liu}
\IEEEauthorblockA{\textit{City University of Hong Kong} \\
\textit{Hong Kong SAR}\\
hliu99-c@my.cityu.edu.hk}
\and
\IEEEauthorblockN{3\textsuperscript{rd} Yuheng Jia*}
\IEEEauthorblockA{\textit{Southeast University} \\
\textit{China}\\
yhjia@seu.edu.cn}
\and
\IEEEauthorblockN{4\textsuperscript{th} Junhui Hou∗}
\IEEEauthorblockA{\textit{City University of Hong Kong} \\
\textit{Hong Kong SAR}\\
jh.hou@cityu.edu.hk}
}
\maketitle
\begin{abstract}
The combination of the traditional convolutional network (i.e., an auto-encoder) and the graph convolutional network has attracted much attention in clustering, in which the auto-encoder extracts the node attribute feature and the graph convolutional network captures the topological graph feature. However, the existing works (i) lack a flexible combination mechanism to adaptively fuse those two kinds of features for learning the discriminative representation and (ii) overlook the multi-scale information embedded at different layers for subsequent cluster assignment, leading to inferior clustering results. To this end, we propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN). Specifically, AGCN exploits a heterogeneity-wise fusion module to dynamically fuse the node attribute feature and the topological graph feature. Moreover, AGCN develops a scale-wise fusion module to adaptively aggregate the multi-scale features embedded at different layers. Based on a unified optimization framework, AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion. Compared with the existing deep clustering methods, our method is more flexible and effective since it comprehensively considers the numerous and discriminative information embedded in the network and directly produces the clustering results. Extensive quantitative and qualitative results on commonly used benchmark datasets validate that our AGCN consistently outperforms state-of-the-art methods.
\end{abstract}


\section{CCS CONCEPTS}
\textbf{Computing methodologies $\rightarrow$ Cluster analysis.}\par
\begin{IEEEkeywords}
Deep clustering, attention-based mechanism, multi-scale features, feature fusion
\end{IEEEkeywords}


\textbf{ACM Reference Format:}\par
{\footnotesize Zhihao Peng, Hui Liu, Yuheng Jia, and Junhui Hou. 2021. Attention-driven Graph Clustering Network. In Proceedings of the 29th ACM International Conference on Multimedia (MM '21), October 20--24, 2021, Virtual Event, China. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3474085.3475276}


\underline{\hspace{3cm} }\par
{\scriptsize Yuheng Jia and Junhui Hou are the corresponding authors. This work was supported
by the Hong Kong Research Grants Council under Grant CityU 11219019.}
\underline{\hspace{9cm} }\par
{\scriptsize Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than the
author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from permissions@acm.org.\par MM ’21, October 20–24, 2021, Virtual Event, China\par
© 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM.\par
https://doi.org/10.1145/3474085.3475276}





\section{Introduction}
  Clustering is a primary yet challenging task in data analysis, aiming to partition similar samples into the same group and dissimilar samples into different groups. Recently, benefiting from the breakthroughs in deep learning, numerous deep clustering approaches have achieved state-of-the-art performance in many applications, including anomaly detection [4, 26, 39], signal propagation [10, 13--16, 22], and transfer clustering [7, 29, 31, 32]. The crucial prerequisite of deep clustering is to extract intricate patterns from the underlying data for effectively learning the data representation. For example, Hinton et al. [9] drove the representation learning by a designed auto-encoder network (AE). Xie et al. [41] proposed the deep embedded clustering method (DEC) to learn the feature representation by clustering a set of data points in a jointly optimized feature space. Guo et al. [6] introduced a reconstruction loss to improve DEC for learning a better representation. Although these works have achieved remarkable improvements, they simply focus on the node attribute feature and ignore the topological graph information embedded in the data.\par

As the topological graph information can provide valuable guidance for embedding learning, various works [3, 18, 27, 28, 36] have been proposed that introduce graph convolutional networks (GCNs) to use the topological graph information for learning the graph structure feature. Specifically, Kipf et al. [18] proposed the graph auto-encoder (GAE) and the variational graph auto-encoder (VGAE) to learn the graph structure feature based on the AE and the variational AE frameworks, respectively. Furthermore, based on the GAE framework, Pan et al. [27] developed the adversarially regularized graph auto-encoder network (ARGA) by introducing an adversarial regularizer. Wang et al. [37] combined GAE with the graph attention network model [36] to encode the topological structure and node contents. Bo et al. [3] designed the structural deep clustering network (SDCN) to integrate the topological graph information into deep clustering based on the DEC framework. However, these existing works naively equate the importance of the topological graph feature and the node attribute feature in any case, inevitably limiting the representation learning. Moreover, they only consider the latent features extracted from the deepest layer, neglecting the off-the-shelf yet discriminative multi-scale information embedded in different layers.\par

In this paper, we propose a novel deep clustering method named attention-driven graph clustering network (AGCN) to address the above-mentioned issues. Specifically, AGCN includes two fusion modules, namely the AGCN heterogeneity-wise fusion module (AGCN-H) and the AGCN scale-wise fusion module (AGCN-S), in which both modules exploit the attention-based mechanism to dynamically measure the importance of the corresponding features for the subsequent feature fusion. AGCN-H adaptively merges the GCN feature and the AE feature from the same layer, while AGCN-S dynamically concatenates the multi-scale features from different layers. For conducting the training process in an unsupervised fashion, we design a unified learning framework capable of directly producing the clustering results. Extensive quantitative and qualitative comparisons are conducted on six commonly used benchmark datasets to validate the superiority of AGCN over state-of-the-art methods. Furthermore, ablation studies are performed to validate the efficiency and effectiveness of our approach.\par


\textbf{Notation:} Throughout this paper, scalars are denoted by italic lower-case letters, vectors by bold lower-case letters, matrices by bold upper-case letters, and operators by calligraphic letters, respectively. Let $\mathbf{V}$ be the set of nodes, $\mathbf{E}$ be the set of edges between nodes, and $\mathbf{X} \in \mathbb{R}^{n \times d}$ be the node attribute matrix; then $\mathcal{G} = (\mathbf{V}, \mathbf{E}, \mathbf{X})$ denotes the undirected graph. The adjacency matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ indicates the topological structure of the graph $\mathcal{G}$, and the corresponding degree matrix is $\mathbf{D} \in \mathbb{R}^{n \times n}$. $\|\cdot\|_F$ denotes the Frobenius norm. The main notations used throughout the paper are summarized in Table 1.

\section{ RELATED WORK}
{\normalfont
Recently, many deep clustering methods [1, 6, 7, 20, 30, 41] have been proposed and achieved impressive performance, benefiting from the strong representation power of the deep neural networks.
Auto-encoder (AE) [9] is one of the most commonly used unsupervised deep neural networks, which plays a crucial role in deep clustering. For example, the deep embedded clustering (DEC) [41] used the AE-based framework to learn the deep representations
by Kullback-Leibler (KL) divergence minimization. The improved DEC method (IDEC) [6] promoted the clustering performance of DEC. [7] achieved the deep transfer clustering by simultaneously learning the data representation and clustering the unlabelled data of novel visual categories. [20] incorporated adversarial fairness to complete the group invariant cluster assignment and the structural preservation. However, these methods only focus on learning
the data representation from the samples themselves and overlook the potential valuable graph structure information between data samples.\par
To exploit the structural information underlying the data, some graph convolutional networks (GCNs) based clustering methods were proposed [17, 18, 27, 28, 35–38, 40, 42]. For instance, [18] proposed using the graph auto-encoder (GAE) and the variational
graph auto-encoder (VGAE) to learn the graph-structured data. [37] provided the deep attentional embedded graph clustering network
(DAEGC) to encode the topological structure and node contents in a graph by introducing the attentional neighbor-wise fusion strategy on the GAE framework. The adversarially regularized graph autoencoder (ARGA) [27] further improved the clustering performance by introducing an adversarial learning scheme to learn the graph embedding. [3] designed the structural deep clustering network (SDCN) to integrate the structural information into deep clustering by embedding GCN into the DEC framework.\par
Although the above approaches can improve the clustering performance, they still have the following drawbacks: (i) naively equating the importance of the topological graph feature and the node attribute feature; (ii) neglecting the multi-scale information embedded in different layers. Accordingly, embedding learning cannot effectively and comprehensively exploit the graph structure of the data. Moreover, the interaction between the graph structure feature and the node attribute feature is inadequate to a certain extent. As a result, fruitful and valuable information is dropped, limiting the performance of the clustering model.\par
}
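The KL-divergence-based self-training that DEC introduced (and that SDCN and our framework inherit) can be sketched in numpy as follows. This is a minimal illustration, not the paper's implementation: the toy embeddings and cluster centers stand in for a trained encoder and initialized centroids, and the Student's t kernel with the sharpened target distribution follows the standard DEC formulation.

```python
import numpy as np

def soft_assignment(H, centers, alpha=1.0):
    """DEC-style soft assignment: q_ij measures the similarity between
    embedding h_i and cluster center mu_j via a Student's t kernel."""
    d2 = ((H[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(Q):
    """Sharpened target p_ij that emphasizes high-confidence assignments."""
    w = Q ** 2 / Q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_loss(P, Q):
    """KL(P || Q), minimized to self-train the network."""
    return float((P * np.log(P / Q)).sum())

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 2))        # toy embeddings (hypothetical encoder output)
centers = rng.normal(size=(2, 2))  # toy cluster centers
Q = soft_assignment(H, centers)
P = target_distribution(Q)
loss = kl_loss(P, Q)
```

Minimizing this loss pushes each sample toward its most confident cluster, which is how the network is trained without labels.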


\begin{table}[htbp]
\caption{Main Notations}
\begin{center}
\begin{tabular}{l|l}
\hline
    Notations & Descriptions \\
    \hline
    $\mathbf{X}$, $\widehat{\mathbf{X}}$ & The input data and its reconstructed matrix \\
    $\mathbf{H}$ & The extracted feature from the AE module \\
    $\mathbf{A}$, $\mathbf{D}$ & The adjacency matrix and the degree matrix \\
    $\mathbf{Z}_i$, $\mathbf{H}_i$ & The GCN and encoder outputs from the $i$-th layer \\
    $\mathbf{M}_i$ & The AGCN-H weight matrix for $\mathbf{Z}_i$, $\mathbf{H}_i$ \\
    $\mathbf{Z}'_i$ & The AGCN-H combined feature for $\mathbf{Z}_i$, $\mathbf{H}_i$ \\
    $\mathbf{U}_i$, $u_i$ & The AGCN-S weight matrix and its elements \\
    $\mathbf{Z}'$ & The AGCN-S combined feature \\
    $\mathbf{Z}$ & The soft assignment \\
    \hline
    $n$, $l$, $k$ & The number of samples, network layers, and clusters \\
    $d$, $d_i$ & The dimension of $\mathbf{X}$ and the $i$-th latent feature \\
    \hline
    $\cdot\,\|\,\cdot$ & The concatenation operation \\
    $\|\cdot\|_F$ & The Frobenius norm \\
    \hline
\end{tabular}
\label{tab:notation}
\end{center}
\end{table}
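Using the notations above ($\mathbf{A}$, $\mathbf{D}$), a single GCN propagation step smooths node features over the graph through the symmetrically normalized adjacency. The following numpy sketch illustrates the standard formulation $\phi(\mathbf{D}^{-1/2}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-1/2}\mathbf{H}\mathbf{W})$; the toy graph, attributes, and weight matrix are placeholders, not values from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where D is the degree matrix of A + I."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                     # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees (>= 1 by construction)
    D_inv_sqrt = np.diag(d ** -0.5)           # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# toy undirected path graph with 4 nodes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(4, 3))  # node attribute features
W = np.random.default_rng(2).normal(size=(3, 2))  # layer weights
Z = gcn_layer(A, H, W)
```

Each output row mixes a node's own attributes with those of its neighbors, which is how the topological graph feature is captured.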

\section{ PROPOSED METHOD}
{\normalfont
In this section, we first describe the details of the proposed attention-driven graph clustering network (AGCN) shown in Figure 1, including the heterogeneity-wise fusion module (AGCN-H) and the scale-wise fusion module (AGCN-S). Then, we introduce the network training process and the computational complexity analysis.
}

\subsection{ AGCN-H}
{\normalfont
As the graph convolutional network (GCN) can efficiently capture the topological graph information and the auto-encoder (AE) can
reasonably extract the node attribute feature, we propose the AGCN-H module to dynamically combine the GCN feature and the AE feature to learn a more discriminative representation. Specifically,
we exploit the attention-based mechanism with the heterogeneity-wise strategy by conducting the attention coefficient learning and the subsequent weighted feature fusion. The corresponding illustration of AGCN-H is shown in Figure 2(a), and the implementation details are as follows.\par
First, the encoder-decoder module is used to extract the latent representation by minimizing the reconstruction loss between the raw data and the reconstructed data, i.e.,\par
\begin{equation}
\begin{aligned}
L_R &= \|\mathbf{X}-\widehat{\mathbf{X}}\|_F^2 \\
\text{s.t.}\quad \mathbf{H}_i &= \phi(\mathbf{W}_i^e \mathbf{H}_{i-1} + \mathbf{b}_i^e),\\
\widehat{\mathbf{H}}_i &= \phi(\mathbf{W}_i^d \widehat{\mathbf{H}}_{i-1} + \mathbf{b}_i^d), \quad i=1,\dots,l,
\end{aligned}
\label{eq:recon}
\end{equation}
where $\mathbf{X} \in \mathbb{R}^{n \times d}$ denotes the raw data, $\widehat{\mathbf{X}} \in \mathbb{R}^{n \times d}$ denotes the reconstructed data, and $\mathbf{H}_i \in \mathbb{R}^{n \times d_i}$ and $\widehat{\mathbf{H}}_i \in \mathbb{R}^{n \times \widehat{d}_i}$ denote the encoder and decoder outputs of the $i$-th layer, respectively. $\phi(\cdot)$ denotes the activation function, such as Tanh or ReLU [5]. $\mathbf{W}_i^e$ and $\mathbf{b}_i^e$ denote the network weight and bias of the $i$-th encoder layer, respectively, and $\mathbf{W}_i^d$ and $\mathbf{b}_i^d$ denote those of the $i$-th decoder layer. In particular, $\mathbf{H}_0$ denotes the raw data $\mathbf{X}$, and $\widehat{\mathbf{H}}_l$ denotes the reconstructed data $\widehat{\mathbf{X}}$.
}
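The encoder-decoder pass and reconstruction loss described above can be sketched in numpy as follows. The layer widths, the Tanh activation, and the random initialization are illustrative choices only; the sketch also uses the row-vector convention $\mathbf{H}_{i-1}\mathbf{W}_i$ rather than the column-vector form in the equation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 10))          # raw data: n = 8 samples, d = 10 attributes

dims = [10, 6, 3]                     # encoder widths d_0 = d, d_1, d_2 (illustrative)
enc = [(rng.normal(scale=0.1, size=(dims[i], dims[i + 1])),
        np.zeros(dims[i + 1])) for i in range(2)]
dec = [(rng.normal(scale=0.1, size=(dims[i + 1], dims[i])),
        np.zeros(dims[i])) for i in reversed(range(2))]

H = X                                 # H_0 = X
for W, b in enc:                      # H_i = phi(H_{i-1} W_i^e + b_i^e)
    H = np.tanh(H @ W + b)

X_hat = H                             # decoder starts from the latent feature H
for W, b in dec:                      # H_hat_i = phi(H_hat_{i-1} W_i^d + b_i^d)
    X_hat = np.tanh(X_hat @ W + b)

L_R = np.linalg.norm(X - X_hat, "fro") ** 2   # reconstruction loss
```

In training, gradient descent on `L_R` would update the weights; here a single forward pass suffices to show how $\mathbf{H}$ and $\widehat{\mathbf{X}}$ arise.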


\begin{figure*}
    \centering
    \includegraphics[width=\textwidth]{./image/picture1.png}
    \caption{The architecture of the proposed attention-driven graph clustering network (AGCN). $\mathbf{X}$ denotes the input data, $\mathbf{A}$ denotes the adjacency matrix, $\widehat{\mathbf{X}}$ denotes the reconstructed data, and $l$ denotes the number of layers. The upper part is an encoder-decoder (i.e., auto-encoder) module in which the latent representation $\mathbf{H}$ is extracted by minimizing the reconstruction loss between $\mathbf{X}$ and $\widehat{\mathbf{X}}$. The lower part consists of the proposed AGCN heterogeneity-wise fusion module (AGCN-H) and scale-wise fusion module (AGCN-S), which achieve the heterogeneous feature fusion and the multi-scale feature fusion, respectively. The network is self-trained by minimizing the KL divergence between the $\mathbf{H}$ distribution (indicated in orange) and the $\mathbf{Z}$ distribution (indicated in blue).}
    \label{fig:my_label}
\end{figure*}
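The attention-based heterogeneity-wise fusion can be sketched as follows. This is a hedged illustration of the idea only: per-node attention coefficients for the GCN feature $\mathbf{Z}_i$ and the AE feature $\mathbf{H}_i$ are learned from their concatenation through a single fully connected layer with a softmax, and the exact parameterization in the paper (e.g., additional activations or broadcasting of the coefficients) may differ. The weight matrix `W_att` is a hypothetical placeholder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def heterogeneity_fusion(Z, H, W_att):
    """Attention-weighted fusion of a GCN feature Z and an AE feature H.
    W_att maps the concatenated features to two attention logits per node;
    the softmaxed coefficients weight each feature before summation."""
    logits = np.concatenate([Z, H], axis=1) @ W_att   # (n, 2) attention logits
    M = softmax(logits, axis=1)                       # per-node coefficients, rows sum to 1
    return M[:, :1] * Z + M[:, 1:] * H                # weighted combination Z'_i

rng = np.random.default_rng(3)
Z = rng.normal(size=(5, 4))        # toy GCN feature from one layer
H = rng.normal(size=(5, 4))        # toy AE feature from the same layer
W_att = rng.normal(size=(8, 2))    # hypothetical attention weights
Z_fused = heterogeneity_fusion(Z, H, W_att)
```

Because the coefficients are computed per node, the fusion can lean on the topological feature for some nodes and the attribute feature for others, which is the flexibility the fixed-weight combinations in prior work lack.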





\section{Ease of Use}

\subsection{Maintaining the Integrity of the Specifications}

The IEEEtran class file is used to format your paper and style the text. All margins, 
column widths, line spaces, and text fonts are prescribed; please do not 
alter them. You may note peculiarities. For example, the head margin
measures proportionately more than is customary. This measurement 
and others are deliberate, using specifications that anticipate your paper 
as one part of the entire proceedings, and not as an independent document. 
Please do not revise any of the current designations.

\section{Prepare Your Paper Before Styling}
Before you begin to format your paper, first write and save the content as a 
separate text file. Complete all content and organizational editing before 
formatting. Please note sections \ref{AA}--\ref{SCM} below for more information on 
proofreading, spelling and grammar.

Keep your text and graphic files separate until after the text has been 
formatted and styled. Do not number text heads---{\LaTeX} will do that 
for you.

\subsection{Abbreviations and Acronyms}\label{AA}
Define abbreviations and acronyms the first time they are used in the text, 
even after they have been defined in the abstract. Abbreviations such as 
IEEE, SI, MKS, CGS, ac, dc, and rms do not have to be defined. Do not use 
abbreviations in the title or heads unless they are unavoidable.

\subsection{Units}
\begin{itemize}
\item Use either SI (MKS) or CGS as primary units. (SI units are encouraged.) English units may be used as secondary units (in parentheses). An exception would be the use of English units as identifiers in trade, such as ``3.5-inch disk drive''.
\item Avoid combining SI and CGS units, such as current in amperes and magnetic field in oersteds. This often leads to confusion because equations do not balance dimensionally. If you must use mixed units, clearly state the units for each quantity that you use in an equation.
\item Do not mix complete spellings and abbreviations of units: ``Wb/m\textsuperscript{2}'' or ``webers per square meter'', not ``webers/m\textsuperscript{2}''. Spell out units when they appear in text: ``. . . a few henries'', not ``. . . a few H''.
\item Use a zero before decimal points: ``0.25'', not ``.25''. Use ``cm\textsuperscript{3}'', not ``cc''.
\end{itemize}

\subsection{Equations}
Number equations consecutively. To make your 
equations more compact, you may use the solidus (~/~), the exp function, or 
appropriate exponents. Italicize Roman symbols for quantities and variables, 
but not Greek symbols. Use a long dash rather than a hyphen for a minus 
sign. Punctuate equations with commas or periods when they are part of a 
sentence, as in:
\begin{equation}
a+b=\gamma\label{eq}
\end{equation}

Be sure that the 
symbols in your equation have been defined before or immediately following 
the equation. Use ``\eqref{eq}'', not ``Eq.~\eqref{eq}'' or ``equation \eqref{eq}'', except at 
the beginning of a sentence: ``Equation \eqref{eq} is . . .''

\subsection{\LaTeX-Specific Advice}

Please use ``soft'' (e.g., \verb|\eqref{Eq}|) cross references instead
of ``hard'' references (e.g., \verb|(1)|). That will make it possible
to combine sections, add equations, or change the order of figures or
citations without having to go through the file line by line.

Please don't use the \verb|{eqnarray}| equation environment. Use
\verb|{align}| or \verb|{IEEEeqnarray}| instead. The \verb|{eqnarray}|
environment leaves unsightly spaces around relation symbols.

Please note that the \verb|{subequations}| environment in {\LaTeX}
will increment the main equation counter even when there are no
equation numbers displayed. If you forget that, you might write an
article in which the equation numbers skip from (17) to (20), causing
the copy editors to wonder if you've discovered a new method of
counting.

{\BibTeX} does not work by magic. It doesn't get the bibliographic
data from thin air but from .bib files. If you use {\BibTeX} to produce a
bibliography you must send the .bib files. 

{\LaTeX} can't read your mind. If you assign the same label to a
subsubsection and a table, you might find that Table I has been cross
referenced as Table IV-B3. 

{\LaTeX} does not have precognitive abilities. If you put a
\verb|\label| command before the command that updates the counter it's
supposed to be using, the label will pick up the last counter to be
cross referenced instead. In particular, a \verb|\label| command
should not go before the caption of a figure or a table.

Do not use \verb|\nonumber| inside the \verb|{array}| environment. It
will not stop equation numbers inside \verb|{array}| (there won't be
any anyway) and it might stop a wanted equation number in the
surrounding equation.

\subsection{Some Common Mistakes}\label{SCM}
\begin{itemize}
\item The word ``data'' is plural, not singular.
\item The subscript for the permeability of vacuum $\mu_{0}$, and other common scientific constants, is zero with subscript formatting, not a lowercase letter ``o''.
\item In American English, commas, semicolons, periods, question and exclamation marks are located within quotation marks only when a complete thought or name is cited, such as a title or full quotation. When quotation marks are used, instead of a bold or italic typeface, to highlight a word or phrase, punctuation should appear outside of the quotation marks. A parenthetical phrase or statement at the end of a sentence is punctuated outside of the closing parenthesis (like this). (A parenthetical sentence is punctuated within the parentheses.)
\item A graph within a graph is an ``inset'', not an ``insert''. The word alternatively is preferred to the word ``alternately'' (unless you really mean something that alternates).
\item Do not use the word ``essentially'' to mean ``approximately'' or ``effectively''.
\item In your paper title, if the words ``that uses'' can accurately replace the word ``using'', capitalize the ``u''; if not, keep using lower-cased.
\item Be aware of the different meanings of the homophones ``affect'' and ``effect'', ``complement'' and ``compliment'', ``discreet'' and ``discrete'', ``principal'' and ``principle''.
\item Do not confuse ``imply'' and ``infer''.
\item The prefix ``non'' is not a word; it should be joined to the word it modifies, usually without a hyphen.
\item There is no period after the ``et'' in the Latin abbreviation ``et al.''.
\item The abbreviation ``i.e.'' means ``that is'', and the abbreviation ``e.g.'' means ``for example''.
\end{itemize}
An excellent style manual for science writers is \cite{b7}.

\subsection{Authors and Affiliations}
\textbf{The class file is designed for, but not limited to, six authors.} A 
minimum of one author is required for all conference articles. Author names 
should be listed starting from left to right and then moving down to the 
next line. This is the author sequence that will be used in future citations 
and by indexing services. Names should not be listed in columns nor group by 
affiliation. Please keep your affiliations as succinct as possible (for 
example, do not differentiate among departments of the same organization).

\subsection{Identify the Headings}
Headings, or heads, are organizational devices that guide the reader through 
your paper. There are two types: component heads and text heads.

Component heads identify the different components of your paper and are not 
topically subordinate to each other. Examples include Acknowledgments and 
References and, for these, the correct style to use is ``Heading 5''. Use 
``figure caption'' for your Figure captions, and ``table head'' for your 
table title. Run-in heads, such as ``Abstract'', will require you to apply a 
style (in this case, italic) in addition to the style provided by the drop 
down menu to differentiate the head from the text.

Text heads organize the topics on a relational, hierarchical basis. For 
example, the paper title is the primary text head because all subsequent 
material relates and elaborates on this one topic. If there are two or more 
sub-topics, the next level head (uppercase Roman numerals) should be used 
and, conversely, if there are not at least two sub-topics, then no subheads 
should be introduced.

\subsection{Figures and Tables}
\paragraph{Positioning Figures and Tables} Place figures and tables at the top and 
bottom of columns. Avoid placing them in the middle of columns. Large 
figures and tables may span across both columns. Figure captions should be 
below the figures; table heads should appear above the tables. Insert 
figures and tables after they are cited in the text. Use the abbreviation 
``Fig.~\ref{fig}'', even at the beginning of a sentence.

\begin{table}[htbp]
\caption{Table Type Styles}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Table}&\multicolumn{3}{|c|}{\textbf{Table Column Head}} \\
\cline{2-4} 
\textbf{Head} & \textbf{\textit{Table column subhead}}& \textbf{\textit{Subhead}}& \textbf{\textit{Subhead}} \\
\hline
copy& More table copy$^{\mathrm{a}}$& &  \\
\hline
\multicolumn{4}{l}{$^{\mathrm{a}}$Sample of a Table footnote.}
\end{tabular}
\label{tab1}
\end{center}
\end{table}

\begin{figure}[htbp]
\centerline{\includegraphics{fig1.png}}
\caption{Example of a figure caption.}
\label{fig}
\end{figure}

Figure Labels: Use 8 point Times New Roman for Figure labels. Use words 
rather than symbols or abbreviations when writing Figure axis labels to 
avoid confusing the reader. As an example, write the quantity 
``Magnetization'', or ``Magnetization, M'', not just ``M''. If including 
units in the label, present them within parentheses. Do not label axes only 
with units. In the example, write ``Magnetization (A/m)'' or ``Magnetization 
\{A[m(1)]\}'', not just ``A/m''. Do not label axes with a ratio of 
quantities and units. For example, write ``Temperature (K)'', not 
``Temperature/K''.

\section*{Acknowledgment}

The preferred spelling of the word ``acknowledgment'' in America is without 
an ``e'' after the ``g''. Avoid the stilted expression ``one of us (R. B. 
G.) thanks $\ldots$''. Instead, try ``R. B. G. thanks$\ldots$''. Put sponsor 
acknowledgments in the unnumbered footnote on the first page.

\section*{References}

Please number citations consecutively within brackets \cite{b1}. The 
sentence punctuation follows the bracket \cite{b2}. Refer simply to the reference 
number, as in \cite{b3}---do not use ``Ref. \cite{b3}'' or ``reference \cite{b3}'' except at 
the beginning of a sentence: ``Reference \cite{b3} was the first $\ldots$''

Number footnotes separately in superscripts. Place the actual footnote at 
the bottom of the column in which it was cited. Do not put footnotes in the 
abstract or reference list. Use letters for table footnotes.

Unless there are six authors or more give all authors' names; do not use 
``et al.''. Papers that have not been published, even if they have been 
submitted for publication, should be cited as ``unpublished'' \cite{b4}. Papers 
that have been accepted for publication should be cited as ``in press'' \cite{b5}. 
Capitalize only the first word in a paper title, except for proper nouns and 
element symbols.

For papers published in translation journals, please give the English 
citation first, followed by the original foreign-language citation \cite{b6}.

\begin{thebibliography}{00}
\bibitem{b1} G. Eason, B. Noble, and I. N. Sneddon, ``On certain integrals of Lipschitz-Hankel type involving products of Bessel functions,'' Phil. Trans. Roy. Soc. London, vol. A247, pp. 529--551, April 1955.
\bibitem{b2} J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp.68--73.
\bibitem{b3} I. S. Jacobs and C. P. Bean, ``Fine particles, thin films and exchange anisotropy,'' in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271--350.
\bibitem{b4} K. Elissa, ``Title of paper if known,'' unpublished.
\bibitem{b5} R. Nicole, ``Title of paper with only first word capitalized,'' J. Name Stand. Abbrev., in press.
\bibitem{b6} Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, ``Electron spectroscopy studies on magneto-optical media and plastic substrate interface,'' IEEE Transl. J. Magn. Japan, vol. 2, pp. 740--741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
\bibitem{b7} M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.
\end{thebibliography}
\vspace{12pt}
\color{red}
IEEE conference templates contain guidance text for composing and formatting conference papers. Please ensure that all template text is removed from your conference paper prior to submission to the conference. Failure to remove the template text from your paper may result in your paper not being published.

\end{document}
