If you are wondering where the data on this site comes from, please visit https://api.github.com/users/yijingru/events. GitMemory does not store any data; it only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

yijingru/BBAVectors-Oriented-Object-Detection 254

[WACV2021] Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors

yijingru/KG_Instance_Segmentation 67

[MICCAI 2019] Multi-scale Cell Instance Segmentation with Keypoint Graph based Bounding Boxes

yijingru/ASSD-Pytorch 35

[CVIU 2019] ASSD learns to highlight useful regions on the feature maps while suppressing the irrelevant information, thereby providing reliable guidance for object detection.

yijingru/Vertebra-Landmark-Detection 32

[ISBI 2020] Vertebra-Focused Landmark Detection for Scoliosis Assessment

yijingru/ANCIS-Pytorch 27

[Medical Image Analysis 2019] Attentive Neural Cell Instance Segmentation

yijingru/CRNCIS-Pytorch 5

[ISBI 2019] Context-refined neural cell instance segmentation

yijingru/examples 0

A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.

yijingru/fedlearn-algo 0

Fedlearn's Python toolkit supporting cutting-edge algorithm research | Fedlearn algorithm toolkit for researchers

issue comment yijingru/BBAVectors-Oriented-Object-Detection

The actual number of training samples (some of the .txt files are empty, so there are only a bit more than 30,000 actual training samples)

The negative images were included in training. I guess you mean the label file is empty?

Lg955

comment created time in 2 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Testing gets stuck

Slow training is normal because the DOTA dataset is fairly large: each epoch takes twenty-odd minutes, there are 80 epochs in total, so the whole run takes a bit over a day. As long as training runs, it is fine. As for the testing, I misunderstood its purpose: it is just a visualization, and pressing any key shows the visualization of the next image. To get the labels you have to go through the main eval path. I fumbled around for quite a while before realizing I had mixed up test and eval. Embarrassing.

So that test is essentially inference, and eval runs on the (unlabeled) test data of DOTA-1.0, with the results then submitted to the DOTA website to obtain the mAP, right?

One more question: does the trainval.txt generated by the code contain only the train split of DOTA-V1.0, or both the train and val splits? In train.py I noticed the author does not run a validation pass after each epoch, so does that mean the results of all epochs have to be evaluated and submitted to the DOTA website to find out which one is best?

You can also split off a small validation set to find the best training procedure and then train on everything together, although the validation data distribution may be biased (a minimal split sketch follows this entry).

yzk-lab

comment created time in 4 days
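
A minimal sketch of such a split, assuming the cropped image ids are listed one per line in trainval.txt; the output file names and the 10% ratio are illustrative, not the repo's actual layout:

# split_val.py -- hypothetical helper: carve a small validation list out of trainval.txt.
# Assumes one image id per line; file names and the split ratio are illustrative.
import random

random.seed(0)
with open('trainval.txt') as f:
    ids = [line.strip() for line in f if line.strip()]

random.shuffle(ids)
n_val = max(1, int(0.1 * len(ids)))  # hold out ~10% for validation
val_ids, train_ids = ids[:n_val], ids[n_val:]

for name, subset in [('train_split.txt', train_ids), ('val_split.txt', val_ids)]:
    with open(name, 'w') as out:
        out.write('\n'.join(subset) + '\n')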

issue comment yijingru/BBAVectors-Oriented-Object-Detection

No module named 'datasets.DOTA_devkit'

https://github.com/CAPTAIN-WHU/DOTA_devkit

zhaowendao30

comment created time in 4 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

The training loss is very low

Overfit?

jianminglv20

comment created time in 8 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Shape error on Custom Data

You could print the shapes of all tensors at loss.py:88 to locate the error (a minimal shape-dump sketch follows this entry).

Arka161

comment created time in 12 days
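
A minimal sketch of that kind of shape dump; the helper and the variable names are made up for illustration, and only the idea of printing every tensor's shape right before the failing line matters:

# Hypothetical debugging helper for a shape mismatch inside a loss function.
# Call it right before the failing operation (e.g. around loss.py:88).
import torch

def dump_shapes(**tensors):
    # Print name -> shape for every tensor so the mismatching dimension stands out.
    for name, t in tensors.items():
        if torch.is_tensor(t):
            print(f'{name}: shape={tuple(t.shape)} dtype={t.dtype}')
        else:
            print(f'{name}: {type(t).__name__}')

# Example call with made-up names; substitute the tensors actually used in the loss:
# dump_shapes(pred_heatmap=pr_hm, gt_heatmap=gt_hm, pred_vectors=pr_wh, mask=reg_mask)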

fork yijingru/fedlearn-algo

Fedlearn's Python toolkit supporting cutting-edge algorithm research | Fedlearn algorithm toolkit for researchers

fork in 12 days

started fedlearnAI/fedlearn-algo

started time in 12 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Train on custom data

Hi, I think I didn't really use 'difficulty'. You may remove the key or set it to any number.

shiliu0111

comment created time in 20 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Code style

Thanks! I will consider improving the code.

kleinicke

comment created time in 21 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Setting up DOTA for training and evaluation

Of course you can. You can send a pull request and I will review and merge it. Thanks for your contribution!

Arka161

comment created time in 25 days

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Setting up DOTA for training and evaluation

It is the image split label. Check this: https://github.com/CAPTAIN-WHU/DOTA_devkit#usage

Arka161

comment created time in a month

issue comment yijingru/BBAVectors-Oriented-Object-Detection

RGB vs BGR images for DOTA

Hi, I don't think I did channel re-ordering or normalization, because with Batch Normalization it should be fine. But your point is a good one: it would make training more stable and might reduce the NaN-loss problem (a preprocessing sketch follows this entry). Thanks!

batic

comment created time in a month
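
A minimal sketch of the re-ordering and normalization being discussed, assuming the image is loaded with OpenCV as BGR uint8; the ImageNet mean/std values are a common choice, not necessarily what this repo uses:

# Hypothetical preprocessing: BGR -> RGB re-ordering plus per-channel normalization.
# The mean/std are the usual ImageNet statistics; the repo may use different values.
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(bgr_image):
    rgb = bgr_image[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, scale to [0, 1]
    rgb = (rgb - IMAGENET_MEAN) / IMAGENET_STD               # per-channel normalization
    return np.ascontiguousarray(rgb.transpose(2, 0, 1))      # HWC -> CHW for PyTorch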

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Setting up DOTA for training and evaluation

https://github.com/yijingru/BBAVectors-Oriented-Object-Detection#about-dota may help

Arka161

comment created time in a month

issue comment yijingru/BBAVectors-Oriented-Object-Detection

NMS

You may want to check here: https://github.com/yijingru/BBAVectors-Oriented-Object-Detection/blob/7efd410f5f4ded94aca986aeb0bd1292235d1314/func_utils.py#L89 (a generic NMS sketch follows this entry).

Mantha27

comment created time in a month
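
For reference only, a generic axis-aligned NMS sketch; the repo's own suppression for oriented boxes lives at the link above and differs at least in how the overlap is computed:

# Generic non-maximum suppression over axis-aligned boxes (illustrative only).
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    # boxes: (N, 4) as x1, y1, x2, y2; scores: (N,); returns indices of kept boxes.
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thr]  # drop boxes overlapping the kept one too much
    return keep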

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Hello author, the hm loss becomes NaN after about 25 epochs of training

A batch size that is too small can trigger this problem. My guess is that the images within a batch are very dissimilar in distribution, the classes are imbalanced, and some images contain no objects at all. The NaN usually appears only late in training, which I suspect may be related to Adam (a generic NaN-guard sketch follows this entry).

njauwhr

comment created time in a month
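
A generic mitigation sketch, not part of the repo: skip an update when the loss is already non-finite and clip gradients, which often keeps small-batch training from blowing up late in the run (model.compute_loss is a placeholder for the repo's loss computation):

# Hypothetical training-step guard against NaN losses; names are placeholders.
import torch

def train_step(model, optimizer, batch, max_grad_norm=1.0):
    optimizer.zero_grad()
    loss = model.compute_loss(batch)  # placeholder for the repo's loss computation
    if not torch.isfinite(loss):
        print('non-finite loss, skipping this step')  # skip instead of corrupting the weights
        return None
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # tame exploding gradients
    optimizer.step()
    return loss.item()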

issue comment yijingru/BBAVectors-Oriented-Object-Detection

How to print mAP or AP of each object category after Testing

You may want to look into (1) eval.py (the mAP for DOTA is provided by the online server) and (2) test.py.

MFalam

comment created time in a month

issue comment yijingru/BBAVectors-Oriented-Object-Detection

two typos in the code

Thanks for your help! Those issues have been corrected.

kleinicke

comment created time in 2 months

push event yijingru/BBAVectors-Oriented-Object-Detection

yijingru

commit sha db2bae456d8533b2cd01b955cb832af7baca518b

update test.py and base.py

Jingru Yi

commit sha 7efd410f5f4ded94aca986aeb0bd1292235d1314

Merge branch 'master' of https://github.com/yijingru/BBAVectors-Oriented-Object-Detection

push time in 2 months

issue comment yijingru/BBAVectors-Oriented-Object-Detection

Training from scratch = no detections

Did you use the 600x600 cropped images?

Mantha27

comment created time in 2 months

push event yijingru/BBAVectors-Oriented-Object-Detection

yijingru

commit sha 9343222f24f511cb9eb2c58a8c84da0c6576b11b

Update README.md

push time in 2 months

issue comment yijingru/BBAVectors-Oriented-Object-Detection

The way to predict the bounding box with diagonal lines

I tried diagonal vectors before; they can work, but I didn't compare the performance. I think the top/right/bottom/left vectors are easier to capture because they lie on the objects (a corner-decoding sketch follows this entry).

igo312

comment created time in 2 months
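
A minimal sketch of how corners follow from a center point plus the four box boundary-aware vectors (t, r, b, l), each pointing from the center to the midpoint of one edge; this mirrors the paper's formulation, not the repo's exact decoding code:

# Hypothetical corner decoding from a center and four boundary-aware vectors.
import numpy as np

def decode_corners(center, t, r, b, l):
    # center and each vector are length-2 (x, y); the vectors point from the center
    # to the midpoints of the top, right, bottom, and left edges of the oriented box.
    c, t, r, b, l = (np.asarray(v, dtype=np.float32) for v in (center, t, r, b, l))
    return np.stack([c + t + l,   # top-left corner
                     c + t + r,   # top-right corner
                     c + b + r,   # bottom-right corner
                     c + b + l])  # bottom-left corner

# Example: an axis-aligned 4x2 box centered at the origin.
# decode_corners([0, 0], t=[0, -1], r=[2, 0], b=[0, 1], l=[-2, 0])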

issue comment yijingru/BBAVectors-Oriented-Object-Detection

testing issue

As described in the paper, the input images have two scales, 0.5 and 1. The trainval set and the testing set contain 69,337 and 35,777 images after cropping, respectively.

minsu-kim320

comment created time in 3 months

push event yijingru/ObjGuided-Instance-Segmentation

Jingru Yi

commit sha 71e39f84aada581743a5d65f103e63ba0fcc8a9a

add link

push time in 3 months