
The ShowMeAI Daily series has been fully upgraded! It covers AI topics across Tools & Frameworks | Projects & Code | Posts & Sharing | Data & Resources | Research & Papers. Click 历史文章列表 to browse the article archive, subscribe to the topic #ShowMeAI资讯日报 in the official WeChat account to receive daily updates, and click 专题合辑&电子月刊 to browse the complete collections by topic.

1. Tools & Frameworks

Library: text_analysis_tools – a Chinese text analysis toolkit

tags: [text analysis, text classification, text clustering, keyword extraction, sentiment analysis, text error correction]

The toolkit covers: text classification, text clustering, text similarity, keyword extraction, key-phrase extraction, sentiment analysis, text error correction, text summarization, topic keywords, synonyms and near-synonyms, and event triple extraction (a generic keyword-extraction sketch follows the link below).

GitHub: github.com/murray-z/te…
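
For a concrete flavor of one capability on this list, keyword extraction, here is a generic TF-IDF sketch using jieba and scikit-learn. This illustrates the underlying technique only; it is not this toolkit's own API.

```python
# Generic TF-IDF keyword extraction sketch (NOT text_analysis_tools' API).
# Assumes: pip install scikit-learn jieba
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["自然语言处理是人工智能的重要方向", "文本分类和文本聚类是常见的文本分析任务"]

# Tokenize Chinese text with jieba, then let TfidfVectorizer score terms.
vectorizer = TfidfVectorizer(tokenizer=jieba.lcut, token_pattern=None)
tfidf = vectorizer.fit_transform(docs)

terms = vectorizer.get_feature_names_out()
for row in tfidf.toarray():
    # Top-3 terms per document by TF-IDF weight.
    top = sorted(zip(terms, row), key=lambda t: -t[1])[:3]
    print([term for term, score in top if score > 0])
```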

Tool: Redpanda Console – an open-source tool for working with streaming data

tags: [data streaming]

Redpanda Console (formerly Kowl) provides a visual UI for quickly managing and debugging Kafka/Redpanda workloads.

GitHub: github.com/redpanda-da…

Language: Enso – an interactive programming language with dual visual and textual representations

tags: [computer vision, natural language processing, programming languages]

‘Enso – Hybrid visual and textual functional programming.’

GitHub: github.com/enso-org/en…

Framework: nvim-compleet – an autocompletion framework for Neovim

tags: [code completion, autocompletion]

‘nvim-compleet – A Neovim autocompletion framework written in Rust’ by Riccardo Mazzarini

GitHub: github.com/noib3/nvim-…

Model: ClipCap-Chinese – an image captioning model based on ClipCap

tags: [image captioning, image caption]

GitHub: github.com/yangjianxin…
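
ClipCap's core trick, which this repo adapts to Chinese, is a small mapping network that turns one CLIP image embedding into a prefix of pseudo-token embeddings consumed by a GPT-2 style decoder. Below is a minimal PyTorch sketch of that mapping step; the dimensions (512-d CLIP features, 768-d LM embeddings, prefix length 10) are illustrative assumptions, and this is not this repo's code.

```python
# Minimal sketch of the ClipCap mapping-network idea (not this repo's code).
import torch
import torch.nn as nn

class ClipCaptionPrefix(nn.Module):
    """Map one CLIP image embedding to a prefix of LM token embeddings."""
    def __init__(self, clip_dim=512, lm_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.mapper = nn.Sequential(
            nn.Linear(clip_dim, lm_dim * prefix_len // 2),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len // 2, lm_dim * prefix_len),
        )

    def forward(self, clip_embed):            # (batch, clip_dim)
        prefix = self.mapper(clip_embed)      # (batch, lm_dim * prefix_len)
        return prefix.view(-1, self.prefix_len, self.lm_dim)

# The (batch, prefix_len, lm_dim) output is concatenated in front of the
# caption token embeddings and decoded by a GPT-2 style language model.
prefix = ClipCaptionPrefix()(torch.randn(2, 512))
print(prefix.shape)  # torch.Size([2, 10, 768])
```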

Library: Dagger – a portable devkit for CI/CD pipelines

tags: [CI/CD, portable devkit]

‘Dagger – A portable devkit for CI/CD pipelines’

GitHub: github.com/dagger/dagg…

2. Projects & Code

Recommended Kaggle starter notebooks for machine learning / data science / data visualization / deep learning (a minimal scikit-learn sketch follows the list)

tags: [data science, machine learning, data visualization, deep learning, notebook, code]

Data Science Tutorial for Beginners

www.kaggle.com/code/kannca…

Machine Learning Tutorial for Beginners

www.kaggle.com/code/kannca…

Python Data Visualizations

www.kaggle.com/code/benham…

Scikit-Learn ML from Start to Finish

www.kaggle.com/code/jeffd2…

TensorFlow deep NN

www.kaggle.com/code/kakaua…
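
As promised above, here is a minimal end-to-end scikit-learn sketch in the spirit of these starter notebooks: load a built-in dataset, split, fit, and evaluate.

```python
# Minimal end-to-end scikit-learn sketch: load, split, train, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```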

3. Posts & Sharing

Resource: "How to Do Research" – a collection of guides on research and study skills, curated by the Systems and Networking Lab at Beijing Jiaotong University.

tags: [research, guide]

Link: fangvv.github.io/Homepage/ex…

4. Data & Resources

Dataset: UrbanNav – an open-source multisensory dataset for benchmarking positioning algorithms in urban areas

tags: [dataset, urban data, positioning data]

‘UrbanNav: An Open-sourced Multisensory Dataset for Benchmarking Positioning Algorithms Designed for Urban Areas’ by PolyU Intelligent Positioning And Navigation Lab

GitHub: github.com/IPNL-POLYU/…

Resource list: a human-centered AI reading list, with a focus on computer vision

tags: [AI resources, computer vision]

‘A reading list for some interesting papers on Human-centered AI, with a focus on computer vision – a reading list for human-centered AI’ by human-centeredAI

GitHub: github.com/human-cente…

Resource list: MLOps Primer – a collection of resources for getting started with MLOps

tags: [MLOps, resource list]

‘MLOPs Primer – A collection of resources to learn about MLOPs.’ by DAIR.AI

GitHub: github.com/dair-ai/MLO…

5. Research & Papers

Reply with the keyword 日报 in the official WeChat account to get the curated June paper collection for free.

Paper: MAGIC: Microlensing Analysis Guided by Intelligent Computation

Title: MAGIC: Microlensing Analysis Guided by Intelligent Computation

Date: 16 Jun 2022

Field: time series

Tasks: time series

Paper link: arxiv.org/abs/2206.08…

Code: github.com/JasonZHM/ma…

Authors: Haimeng Zhao, Wei Zhu

Summary: The key feature of MAGIC is the introduction of neural controlled differential equation, which provides the capability to handle light curves with irregular sampling and large data gaps.

Abstract: The modeling of binary microlensing light curves via the standard sampling-based method can be challenging, because of the time-consuming light curve computation and the pathological likelihood landscape in the high-dimensional parameter space. In this work, we present MAGIC, which is a machine learning framework to efficiently and accurately infer the microlensing parameters of binary events with realistic data quality. In MAGIC, binary microlensing parameters are divided into two groups and inferred separately with different neural networks. The key feature of MAGIC is the introduction of neural controlled differential equation, which provides the capability to handle light curves with irregular sampling and large data gaps. Based on simulated light curves, we show that MAGIC can achieve fractional uncertainties of a few percent on the binary mass ratio and separation. We also test MAGIC on a real microlensing event. MAGIC is able to locate the degenerate solutions even when large data gaps are introduced. As irregular samplings are common in astronomical surveys, our method also has implications to other studies that involve time series.
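
The neural controlled differential equation (CDE) is the mechanism that lets MAGIC absorb irregular sampling: a hidden state z evolves along the observation path X via dz = f_θ(z) dX, so integration steps adapt to the data gaps. Below is a minimal Euler-discretized sketch of that idea in PyTorch; the dimensions are assumed for illustration, and the authors' code uses a proper CDE solver rather than this crude loop.

```python
# Minimal Euler sketch of a neural controlled differential equation (CDE):
#   dz = f_theta(z) dX, with X the (irregularly sampled) observation path.
# Not the MAGIC implementation, which uses a dedicated CDE solver.
import torch
import torch.nn as nn

hidden, channels = 16, 2      # z in R^hidden, X in R^channels (e.g. time, flux)
f = nn.Sequential(            # f_theta: R^hidden -> R^(hidden x channels)
    nn.Linear(hidden, 64), nn.Tanh(), nn.Linear(64, hidden * channels))

X = torch.randn(50, channels)     # one light curve, 50 irregular samples
z = torch.zeros(hidden)
for k in range(len(X) - 1):
    dX = X[k + 1] - X[k]          # step size adapts to the sampling gaps
    z = z + f(z).view(hidden, channels) @ dX
print(z.shape)  # final hidden state, usable by downstream inference heads
```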

Paper: Efficient Decoder-free Object Detection with Transformers

Title: Efficient Decoder-free Object Detection with Transformers

Date: 14 Jun 2022

Field: computer vision

Tasks: Object Detection

Paper link: arxiv.org/abs/2206.06…

Code: github.com/Pealing/DFF…

Authors: Peixian Chen, Mengdan Zhang, Yunhang Shen, Kekai Sheng, Yuting Gao, Xing Sun, Ke Li, Chunhua Shen

Summary: A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, with the price of bringing considerable computation burden for inference.

Abstract: Vision transformers (ViTs) are changing the landscape of object detection approaches. A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, with the price of bringing considerable computation burden for inference. More subtle usage is the DETR family, which eliminates the need for many hand-designed components in object detection but introduces a decoder demanding an extra-long time to converge. As a result, transformer-based object detection can not prevail in large-scale applications. To overcome these issues, we propose a novel decoder-free fully transformer-based (DFFT) object detector, achieving high efficiency in both training and inference stages, for the first time. We simplify objection detection into an encoder-only single-level anchor-based dense prediction problem by centering around two entry points: 1) Eliminate the training-inefficient decoder and leverage two strong encoders to preserve the accuracy of single-level feature map prediction; 2) Explore low-level semantic features for the detection task with limited computational resources. In particular, we design a novel lightweight detection-oriented transformer backbone that efficiently captures low-level features with rich semantics based on a well-conceived ablation study. Extensive experiments on the MS COCO benchmark demonstrate that DFFT_SMALL outperforms DETR by 2.5% AP with 28% computation cost reduction and more than 10x fewer training epochs. Compared with the cutting-edge anchor-based detector RetinaNet, DFFT_SMALL obtains over 5.5% AP gain while cutting down 70% computation cost.
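
DFFT reduces detection to an encoder-only, single-level, anchor-based dense prediction: every location of one feature map directly emits class scores and box offsets, with no decoder to train. Here is a generic sketch of such a dense head; it is not the paper's architecture, and the channel counts are assumed for illustration.

```python
# Generic single-level anchor-based dense detection head (not DFFT itself).
import torch
import torch.nn as nn

class DenseHead(nn.Module):
    def __init__(self, channels=256, num_classes=80, anchors_per_loc=1):
        super().__init__()
        self.cls = nn.Conv2d(channels, anchors_per_loc * num_classes, 3, padding=1)
        self.box = nn.Conv2d(channels, anchors_per_loc * 4, 3, padding=1)

    def forward(self, feat):                   # feat: (B, C, H, W) from the encoder
        return self.cls(feat), self.box(feat)  # per-location scores and offsets

feat = torch.randn(1, 256, 32, 32)             # a single-level feature map
cls_logits, box_deltas = DenseHead()(feat)
print(cls_logits.shape, box_deltas.shape)      # (1, 80, 32, 32) (1, 4, 32, 32)
```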

Paper: Pythae: Unifying Generative Autoencoders in Python — A Benchmarking Use Case

Title: Pythae: Unifying Generative Autoencoders in Python — A Benchmarking Use Case

Date: 16 Jun 2022

Field: computer vision

Tasks: Image Reconstruction

Paper link: arxiv.org/abs/2206.08…

Code: github.com/clementchad…

Authors: Clément Chadebec, Louis J. Vincent, Stéphanie Allassonnière

Summary: In recent years, deep generative models have attracted increasing interest due to their capacity to model complex distributions.

Abstract: In recent years, deep generative models have attracted increasing interest due to their capacity to model complex distributions. Among those models, variational autoencoders have gained popularity as they have proven both to be computationally efficient and yield impressive results in multiple fields. Following this breakthrough, extensive research has been done in order to improve the original publication, resulting in a variety of different VAE models in response to different tasks. In this paper we present Pythae, a versatile open-source Python library providing both a unified implementation and a dedicated framework allowing straightforward, reproducible and reliable use of generative autoencoder models. We then propose to use this library to perform a case study benchmark where we present and compare 19 generative autoencoder models representative of some of the main improvements on downstream tasks such as image reconstruction, generation, classification, clustering and interpolation. The open-source library can be found at github.com/clementchad…
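
All 19 benchmarked models are variants of the generative-autoencoder template: an encoder defining a posterior over latents, a decoder, and a reconstruction-plus-regularization objective. As a generic illustration of that template (not Pythae's actual API; see the repo for that), here is the vanilla VAE objective in PyTorch.

```python
# Generic VAE objective sketch (illustrative; see Pythae for its unified API).
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(784, 2 * 8)   # outputs mean and log-variance of q(z|x)
dec = nn.Linear(8, 784)

x = torch.rand(32, 784)                                 # flattened images
mu, logvar = enc(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterization
x_hat = torch.sigmoid(dec(z))

recon = F.binary_cross_entropy(x_hat, x, reduction="sum")     # reconstruction
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q || N(0,I))
loss = recon + kl
print(float(loss))
```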

Paper: Robust deep learning based protein sequence design using ProteinMPNN

Title: Robust deep learning based protein sequence design using ProteinMPNN

Date: bioRxiv 2022

Field: medicine

Tasks: Drug Discovery, Protein Folding, Protein Function Prediction, Protein Structure Prediction

Paper link: www.biorxiv.org/content/10.…

Code: github.com/dauparas/Pr…

Authors: J. Dauparas, I. Anishchenko, N. Bennett, H. Bai, R. J. Ragotte, L. F. Milles, B. I. M. Wicky, A. Courbet, R. J. de Haas, N. Bethel, P. J. Y. Leung, T. F. Huddy, S. Pellock, D. Tischer, F. Chan, B. Koepnick, H. Nguyen, A. Kang, B. Sankaran, A. K. Bera, N. P. King, D. Baker

Summary: While deep learning has revolutionized protein structure prediction, almost all experimentally characterized de novo protein designs have been generated using physically based approaches such as Rosetta.

Abstract: While deep learning has revolutionized protein structure prediction, almost all experimentally characterized de novo protein designs have been generated using physically based approaches such as Rosetta. Here we describe a deep learning based protein sequence design method, ProteinMPNN, with outstanding performance in both in silico and experimental tests. The amino acid sequence at different positions can be coupled between single or multiple chains, enabling application to a wide range of current protein design challenges. On native protein backbones, ProteinMPNN has a sequence recovery of 52.4%, compared to 32.9% for Rosetta. Incorporation of noise during training improves sequence recovery on protein structure models, and produces sequences which more robustly encode their structures as assessed using structure prediction algorithms. We demonstrate the broad utility and high accuracy of ProteinMPNN using X-ray crystallography, cryoEM and functional studies by rescuing previously failed designs, made using Rosetta or AlphaFold, of protein monomers, cyclic homo-oligomers, tetrahedral nanoparticles, and target binding proteins.
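
The headline metric, sequence recovery (52.4% for ProteinMPNN vs. 32.9% for Rosetta on native backbones), is simply the fraction of backbone positions where the designed residue matches the native one. A one-function sketch, with hypothetical toy sequences:

```python
# Sequence recovery: fraction of positions where the designed amino acid
# matches the native sequence (the metric quoted in the abstract).
def sequence_recovery(designed: str, native: str) -> float:
    assert len(designed) == len(native)
    matches = sum(d == n for d, n in zip(designed, native))
    return matches / len(native)

print(sequence_recovery("MKTAYIA", "MKSAYIA"))  # 6/7 ≈ 0.857
```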

Paper: Translating Images into Maps

Title: Translating Images into Maps

Date: 3 Oct 2021

Field: computer vision

Tasks: visual mapping

Paper link: arxiv.org/abs/2110.00…

Code: github.com/avishkarsah…

Authors: Avishkar Saha, Oscar Mendez Maldonado, Chris Russell, Richard Bowden

Summary: We show how a novel form of transformer network can be used to map from images and video directly to an overhead map or bird’s-eye-view (BEV) of the world, in a single end-to-end network.

Abstract: We approach instantaneous mapping, converting images to a top-down view of the world, as a translation problem. We show how a novel form of transformer network can be used to map from images and video directly to an overhead map or bird’s-eye-view (BEV) of the world, in a single end-to-end network. We assume a 1-1 correspondence between a vertical scanline in the image, and rays passing through the camera location in an overhead map. This lets us formulate map generation from an image as a set of sequence-to-sequence translations. Posing the problem as translation allows the network to use the context of the image when interpreting the role of each pixel. This constrained formulation, based upon a strong physical grounding of the problem, leads to a restricted transformer network that is convolutional in the horizontal direction only. The structure allows us to make efficient use of data when training, and obtains state-of-the-art results for instantaneous mapping of three large-scale datasets, including a 15% and 30% relative gain against existing best performing methods on the nuScenes and Argoverse datasets, respectively. We make our code available on github.com/avishkarsah…
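
The key physical assumption, a 1-1 correspondence between each vertical image scanline and a ray through the camera location in the overhead map, recasts BEV generation as a set of independent sequence-to-sequence translations. Below is a toy PyTorch sketch of one such column-to-ray translation; it uses a generic transformer encoder rather than the paper's restricted, horizontally-convolutional network, and all sizes are assumptions.

```python
# Toy sketch: translate each vertical image column (a sequence over image
# rows) into a BEV ray (a sequence over depth bins). Not the paper's model.
import torch
import torch.nn as nn

rows, depth_bins, dim = 64, 32, 128
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2)
to_ray = nn.Linear(rows * dim, depth_bins * dim)  # crude seq-to-seq mapping

columns = torch.randn(16, rows, dim)   # 16 image columns as row-sequences
ctx = encoder(columns)                 # contextualize each column
rays = to_ray(ctx.flatten(1)).view(16, depth_bins, dim)  # one ray per column
print(rays.shape)  # (num_columns, depth_bins, dim) -> assemble into BEV map
```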

Paper: M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots

Title: M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots

Date: 19 Dec 2021

Paper link: arxiv.org/abs/2112.13…

Code: github.com/SJTU-ViSYS/…

Authors: Jie Yin, Ang Li, Tao Li, Wenxian Yu, Danping Zou

Summary: We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals.

Abstract: We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All those sensors were well-calibrated and synchronized, and their data were recorded simultaneously. The ground truth trajectories were obtained by the motion capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1TB) captured in diverse scenarios including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public. The webpage of our project is github.com/SJTU-ViSYS/…

Paper: Combining Label Propagation and Simple Models Out-performs Graph Neural Networks

Title: Combining Label Propagation and Simple Models Out-performs Graph Neural Networks

Date: ICLR 2021

Field: graph algorithms

Tasks: Node Classification, Node Property Prediction

Paper link: arxiv.org/abs/2010.13…

Code: github.com/CUAI/Correc… , github.com/dmlc/dgl/tr… , github.com/Chillee/Cor… , github.com/sangyx/gtri… , github.com/xnuohz/Corr…

Authors: Qian Huang, Horace He, Abhay Singh, Ser-Nam Lim, Austin R. Benson

Summary: Graph Neural Networks (GNNs) are the predominant technique for learning over graphs.

Abstract: Graph Neural Networks (GNNs) are the predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an “error correlation” that spreads residual errors in training data to correct errors in test data and (ii) a “prediction correlation” that smooths the predictions on the test data. We call this overall procedure Correct and Smooth (C&S), and the post-processing steps are implemented via simple modifications to standard label propagation techniques from early graph-based semi-supervised learning methods. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as was done in traditional techniques) yields easy and substantial performance gains. We can also incorporate our techniques into big GNN models, providing modest gains. Our code for the OGB results is at github.com/Chillee/Cor…
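
Both post-processing steps are instances of one label-propagation primitive, Z ← αSZ + (1 − α)Z₀ with S a normalized adjacency matrix: first propagate the training residuals ("correct"), then propagate the corrected predictions ("smooth"). A compact NumPy sketch under simplified scaling follows; the paper uses slightly different scaling variants, so treat this as the idea rather than the exact algorithm.

```python
# Compact Correct & Smooth sketch (simplified; see the paper for the exact
# scaling variants). S is the symmetric-normalized adjacency matrix.
import numpy as np

def propagate(S, Z0, alpha=0.8, iters=50):
    Z = Z0.copy()
    for _ in range(iters):
        Z = alpha * (S @ Z) + (1 - alpha) * Z0
    return Z

def correct_and_smooth(S, base_pred, Y_true, train_idx):
    # 1) "Correct": spread the training residuals over the graph.
    E = np.zeros_like(base_pred)
    E[train_idx] = Y_true[train_idx] - base_pred[train_idx]
    Z = base_pred + propagate(S, E)
    # 2) "Smooth": re-anchor training labels, then propagate predictions.
    Z[train_idx] = Y_true[train_idx]
    return propagate(S, Z)

# Toy usage: a 3-node chain graph with 2 classes.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))                 # D^-1/2 A D^-1/2
base = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]])
Y = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
print(correct_and_smooth(S, base, Y, train_idx=[0, 2]))
```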

Paper: Multi-Graph Fusion Networks for Urban Region Embedding

Title: Multi-Graph Fusion Networks for Urban Region Embedding

Date: 24 Jan 2022

Field: graph algorithms

Tasks: Crime Prediction

Paper link: arxiv.org/abs/2201.09…

Code: github.com/wushangbin/…

Authors: Shangbin Wu, Xu Yan, Xiaoliang Fan, Shirui Pan, Shichao Zhu, Chuanpan Zheng, Ming Cheng, Cheng Wang

Summary: Human mobility data contains rich but abundant information, which yields to the comprehensive region embeddings for cross domain tasks.

Abstract: Learning the embeddings for urban regions from human mobility data can reveal the functionality of regions, and then enables the correlated but distinct tasks such as crime prediction. Human mobility data contains rich but abundant information, which yields to the comprehensive region embeddings for cross domain tasks. In this paper, we propose multi-graph fusion networks (MGFN) to enable the cross domain prediction tasks. First, we integrate the graphs with spatio-temporal similarity as mobility patterns through a mobility graph fusion module. Then, in the mobility pattern joint learning module, we design the multi-level cross-attention mechanism to learn the comprehensive embeddings from multiple mobility patterns based on intra-pattern and inter-pattern messages. Finally, we conduct extensive experiments on real-world urban datasets. Experimental results demonstrate that the proposed MGFN outperforms the state-of-the-art methods by up to 12.35% improvement.
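
The fusion step rests on cross-attention between region embeddings learned from different mobility patterns. Here is a minimal sketch using PyTorch's built-in multi-head attention; it is a generic illustration with assumed sizes, whereas MGFN's actual module is multi-level with intra-pattern and inter-pattern messages.

```python
# Generic cross-attention fusion of two mobility-pattern embeddings
# (illustrative only; MGFN's actual module is multi-level).
import torch
import torch.nn as nn

regions, dim = 180, 96                 # e.g. 180 urban regions, 96-d embeddings
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

pattern_a = torch.randn(1, regions, dim)   # embeddings from pattern A
pattern_b = torch.randn(1, regions, dim)   # embeddings from pattern B

# Each region's pattern-A embedding queries pattern-B's embeddings.
fused, _ = attn(query=pattern_a, key=pattern_b, value=pattern_b)
print(fused.shape)  # (1, regions, dim) -> region embeddings for crime prediction
```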

We are ShowMeAI, dedicated to spreading high-quality AI content, sharing industry solutions, and accelerating every step of your technical growth with knowledge! Click 历史文章列表 to browse the article archive, subscribe to the topic #ShowMeAI资讯日报 in the official WeChat account to receive daily updates, and click 专题合辑&电子月刊 to browse the complete collections by topic.

  • Author: 韩信子@ShowMeAI
  • Article archive: 历史文章列表
  • Collections: 专题合辑&电子月刊
  • Notice: all rights reserved; to repost, please contact the platform and the author and credit the source
  • Comments are welcome: please like the post and recommend valuable articles, tools, or suggestions, and we will reply as soon as we can!