fix docstring styles (#2634)
hellock authored May 6, 2020
1 parent 5035022 commit c77ccbb
Showing 15 changed files with 126 additions and 98 deletions.
9 changes: 3 additions & 6 deletions mmdet/core/bbox/coder/delta_xywh_bbox_coder.py
@@ -9,12 +9,9 @@
class DeltaXYWHBBoxCoder(BaseBBoxCoder):
"""Delta XYWH BBox coder
Following the practice in R-CNN [1]_, this coder encodes bbox (x1, y1, x2,
y2) into delta (dx, dy, dw, dh) and decodes delta (dx, dy, dw, dh)
back to original bbox (x1, y1, x2, y2).
References:
.. [1] https://arxiv.org/abs/1311.2524
Following the practice in `R-CNN <https://arxiv.org/abs/1311.2524>`_,
this coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh) and
decodes delta (dx, dy, dw, dh) back to original bbox (x1, y1, x2, y2).
Args:
target_means (Sequence[float]): denormalizing means of target for
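As a usage sketch of the coder described above: the snippet below assumes the ``encode``/``decode`` interface inherited from ``BaseBBoxCoder`` and that the class is importable from ``mmdet.core.bbox.coder``; the box values are made up.

.. code-block:: python

    # Usage sketch (assumptions: import path, encode/decode signatures from
    # BaseBBoxCoder; box values are illustrative).
    import torch
    from mmdet.core.bbox.coder import DeltaXYWHBBoxCoder

    coder = DeltaXYWHBBoxCoder(target_means=(0., 0., 0., 0.),
                               target_stds=(1., 1., 1., 1.))
    proposals = torch.tensor([[0., 0., 10., 10.]])  # (x1, y1, x2, y2)
    gt_bboxes = torch.tensor([[1., 1., 12., 14.]])

    deltas = coder.encode(proposals, gt_bboxes)     # (dx, dy, dw, dh)
    restored = coder.decode(proposals, deltas)      # round-trips back to gt_bboxes
    assert torch.allclose(restored, gt_bboxes, atol=1e-4)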
7 changes: 3 additions & 4 deletions mmdet/core/bbox/coder/tblr_bbox_coder.py
@@ -8,10 +8,9 @@
class TBLRBBoxCoder(BaseBBoxCoder):
"""TBLR BBox coder
Following the practice in FSAF [1]_, this coder encodes gt bboxes (x1, y1,
x2, y2) into (top, bottom, left, right) and decode it back to the original.
References:
.. [1] https://arxiv.org/abs/1903.00621
Following the practice in `FSAF <https://arxiv.org/abs/1903.00621>`_,
this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left,
right) and decodes it back to the original.
Args:
normalizer (list | float): Normalization factor to be
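The transform itself can be sketched in a few lines. This is a standalone sketch, not the library implementation: it computes the distances from each prior box's center to the four sides of the ground-truth box, scaled by a scalar normalizer.

.. code-block:: python

    # Standalone sketch of the TBLR encoding (not the library code): distances
    # from each prior's center to the four gt edges, divided by a normalizer.
    import torch

    def tblr_encode_sketch(priors, gts, normalizer=4.0):
        """priors, gts: (N, 4) tensors in (x1, y1, x2, y2) format."""
        px = (priors[:, 0] + priors[:, 2]) * 0.5
        py = (priors[:, 1] + priors[:, 3]) * 0.5
        top = py - gts[:, 1]
        bottom = gts[:, 3] - py
        left = px - gts[:, 0]
        right = gts[:, 2] - px
        return torch.stack((top, bottom, left, right), dim=-1) / normalizer

The in-tree coder also accepts a per-dimension normalizer (the ``normalizer (list | float)`` argument below); the sketch keeps only the scalar case.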
48 changes: 27 additions & 21 deletions mmdet/core/optimizer/default_constructor.py
@@ -13,31 +13,37 @@
class DefaultOptimizerConstructor(object):
"""Default constructor for optimizers.
Attributes:
By default each parameter shares the same optimizer settings, and we
provide an argument ``paramwise_cfg`` to specify parameter-wise settings.
It is a dict and may contain the following fields:
- ``bias_lr_mult`` (float): It will be multiplied to the learning
rate for all bias parameters (except for those in normalization
layers).
- ``bias_decay_mult`` (float): It will be multiplied to the weight
decay for all bias parameters (except for those in
normalization layers and depthwise conv layers).
- ``norm_decay_mult`` (float): It will be multiplied to the weight
decay for all weight and bias parameters of normalization
layers.
- ``dwconv_decay_mult`` (float): It will be multiplied to the weight
decay for all weight and bias parameters of depthwise conv
layers.
- ``bypass_duplicate`` (bool): If true, the duplicate parameters
would not be added into the optimizer. Default: False
Args:
model (:obj:`nn.Module`): The model with parameters to be optimized.
optimizer_cfg (dict): The config dict of the optimizer.
Positional fields are:
- type: class name of the optimizer.
Optional fields are:
Positional fields are
- `type`: class name of the optimizer.
Optional fields are
- any arguments of the corresponding optimizer type, e.g.,
lr, weight_decay, momentum, etc.
lr, weight_decay, momentum, etc.
paramwise_cfg (dict, optional): Parameter-wise options.
Accepted fields are
- bias_lr_mult (float): It will be multiplied to the learning
rate for all bias parameters (except for those in normalization
layers).
- bias_decay_mult (float): It will be multiplied to the weight
decay for all bias parameters (except for those in
normalization layers and depthwise conv layers).
- norm_decay_mult (float): It will be multiplied to the weight
decay for all weight and bias parameters of normalization
layers.
- dwconv_decay_mult (float): It will be multiplied to the weight
decay for all weight and bias parameters of depthwise conv
layers.
- bypass_duplicate (bool): If true, the duplicate parameters
would not be added into optimizer. Default: False
Example:
>>> model = torch.nn.modules.Conv1d(1, 1, 1)
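A hedged sketch of these settings, following the call pattern of the (truncated) docstring example above; the import path and the specific multiplier values are assumptions.

.. code-block:: python

    # Hedged sketch (assumed import path; multiplier values are illustrative).
    import torch
    from mmdet.core.optimizer import DefaultOptimizerConstructor

    model = torch.nn.Conv2d(3, 8, 3)
    optimizer_cfg = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
    paramwise_cfg = dict(
        bias_lr_mult=2.0,        # biases train at 2x the base learning rate
        bias_decay_mult=0.0,     # no weight decay on biases
        norm_decay_mult=0.0,     # no weight decay on norm layer weights/biases
        dwconv_decay_mult=0.0,   # no weight decay on depthwise conv parameters
        bypass_duplicate=True)   # skip parameters that are registered twice
    optim_builder = DefaultOptimizerConstructor(optimizer_cfg, paramwise_cfg)
    optimizer = optim_builder(model)  # an SGD optimizer over model.parameters()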
2 changes: 1 addition & 1 deletion mmdet/datasets/custom.py
@@ -16,7 +16,7 @@ class CustomDataset(Dataset):
The annotation format is shown as follows. The `ann` field is optional for
testing.
.. code-block::
.. code-block:: none
[
{
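The annotation code block above is truncated in this view; as a hedged sketch, the list-of-dicts format it describes looks roughly like the following, with field names as used by ``CustomDataset`` and illustrative values (the optional ``*_ignore`` fields are an assumption).

.. code-block:: python

    # Hedged sketch of the annotation list (field names follow CustomDataset;
    # the *_ignore fields are optional and assumed here).
    import numpy as np

    annotations = [
        dict(
            filename='a.jpg',
            width=1280,
            height=720,
            ann=dict(  # the `ann` field is optional for testing
                bboxes=np.array([[10., 20., 110., 220.]], dtype=np.float32),  # (n, 4)
                labels=np.array([2], dtype=np.int64),                         # (n, )
                bboxes_ignore=np.zeros((0, 4), dtype=np.float32),             # (k, 4)
                labels_ignore=np.zeros((0, ), dtype=np.int64))),              # (k, )
    ]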
13 changes: 7 additions & 6 deletions mmdet/datasets/pipelines/transforms.py
@@ -32,12 +32,13 @@ class Resize(object):
`img_scale` can either be a tuple (single-scale) or a list of tuples
(multi-scale). There are 3 multiscale modes:
- `ratio_range` is not None: randomly sample a ratio from the ratio range
and multiply it with the image scale.
- `ratio_range` is None and `multiscale_mode` == "range": randomly sample a
scale from the a range.
- `ratio_range` is None and `multiscale_mode` == "value": randomly sample a
scale from multiple scales.
- ``ratio_range is not None``: randomly sample a ratio from the ratio range
and multiply it with the image scale.
- ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly
sample a scale from a range.
- ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly
sample a scale from multiple scales.
Args:
img_scale (tuple or list[tuple]): Image scales for resizing.
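The three modes map onto pipeline entries roughly as follows; this is a hedged sketch in which the concrete scales and the ``keep_ratio`` flag are illustrative.

.. code-block:: python

    # Hedged sketches of the three modes (scales and keep_ratio are illustrative).
    # 1) ratio_range is not None: sample a ratio and scale the base img_scale.
    ratio_mode = dict(type='Resize', img_scale=(1333, 800),
                      ratio_range=(0.8, 1.2), keep_ratio=True)
    # 2) multiscale_mode == 'range': sample a scale between the two given scales.
    range_mode = dict(type='Resize', img_scale=[(1333, 640), (1333, 800)],
                      multiscale_mode='range', keep_ratio=True)
    # 3) multiscale_mode == 'value': pick one of the listed scales.
    value_mode = dict(type='Resize',
                      img_scale=[(1333, 672), (1333, 736), (1333, 800)],
                      multiscale_mode='value', keep_ratio=True)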
62 changes: 32 additions & 30 deletions mmdet/models/backbones/resnet.py
@@ -308,11 +308,12 @@ class ResNet(nn.Module):
freeze running stats (mean and var). Note: Effect on Batch Norm
and its variants only.
plugins (list[dict]): List of plugins for stages, each dict contains:
cfg (dict, required): Cfg dict to build plugin.
position (str, required): Position inside block to insert plugin,
options: 'after_conv1', 'after_conv2', 'after_conv3'.
stages (tuple[bool], optional): Stages to apply plugin, length
should be same as 'num_stages'
- cfg (dict, required): Cfg dict to build plugin.
- position (str, required): Position inside block to insert
plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'.
- stages (tuple[bool], optional): Stages to apply plugin, length
should be same as 'num_stages'.
with_cp (bool): Use checkpoint or not. Using checkpoint will save some
memory while slowing down the training speed.
zero_init_residual (bool): Whether to use zero init for last norm layer
@@ -434,34 +435,38 @@ def make_stage_plugins(self, plugins, stage_idx):
'empirical_attention_block', 'nonlocal_block' into the backbone like
ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of
Bottleneck.
An example of plugins format could be :
>>> plugins=[
... dict(cfg=dict(type='xxx', arg1='xxx'),
... stages=(False, True, True, True),
... position='after_conv2'),
... dict(cfg=dict(type='yyy'),
... stages=(True, True, True, True),
... position='after_conv3'),
... dict(cfg=dict(type='zzz', postfix='1'),
... stages=(True, True, True, True),
... position='after_conv3'),
... dict(cfg=dict(type='zzz', postfix='2'),
... stages=(True, True, True, True),
... position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3
An example of plugins format could be:
>>> plugins=[
... dict(cfg=dict(type='xxx', arg1='xxx'),
... stages=(False, True, True, True),
... position='after_conv2'),
... dict(cfg=dict(type='yyy'),
... stages=(True, True, True, True),
... position='after_conv3'),
... dict(cfg=dict(type='zzz', postfix='1'),
... stages=(True, True, True, True),
... position='after_conv3'),
... dict(cfg=dict(type='zzz', postfix='2'),
... stages=(True, True, True, True),
... position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3
Suppose 'stage_idx=0', the structure of blocks in the stage would be:
.. code-block::
.. code-block:: none
conv1-> conv2->conv3->yyy->zzz1->zzz2
Suppose 'stage_idx=1', the structure of blocks in the stage would be:
.. code-block::
.. code-block:: none
conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2
If stages is missing, the plugin would be applied to all stages.
Args:
@@ -471,7 +476,6 @@
Returns:
list[dict]: Plugins for current stage
"""
stage_plugins = []
for plugin in plugins:
@@ -611,15 +615,13 @@ def train(self, mode=True):

@BACKBONES.register_module()
class ResNetV1d(ResNet):
"""ResNetV1d variant described in [1]_.
"""ResNetV1d variant described in
`Bag of Tricks <https://arxiv.org/pdf/1812.01187.pdf>`_.
Compared with the default ResNet (ResNetV1b), ResNetV1d replaces the 7x7 conv
in the input stem with three 3x3 convs. In the downsampling block, a 2x2
avg_pool with stride 2 is added before the conv, whose stride is changed to 1.
References:
.. [1] https://arxiv.org/pdf/1812.01187.pdf
"""

def __init__(self, **kwargs):
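In a detector config, the V1d variant described above is typically selected via the backbone ``type``. This is a hedged sketch: apart from ``type='ResNetV1d'`` and ``depth``, the remaining keys are typical values, not requirements of the class.

.. code-block:: python

    # Hedged backbone config; keys other than type/depth are illustrative.
    backbone = dict(
        type='ResNetV1d',   # vs. type='ResNet' for the default V1b-style stem
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch')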
6 changes: 1 addition & 5 deletions mmdet/models/dense_heads/fcos_head.py
@@ -10,16 +10,12 @@

@HEADS.register_module()
class FCOSHead(nn.Module):
"""
Fully Convolutional One-Stage Object Detection head from [1]_.
"""Anchor-free head used in `FCOS <https://arxiv.org/abs/1904.01355>`_.
The FCOS head does not use anchor boxes. Instead, bounding boxes are
predicted at each pixel and a centerness measure is used to suppress
low-quality predictions.
References:
.. [1] https://arxiv.org/abs/1904.01355
Example:
>>> self = FCOSHead(11, 7)
>>> feats = [torch.rand(1, 7, s, s) for s in [4, 8, 16, 32, 64]]
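Complementing the doctest above, a hedged ``bbox_head`` config sketch for this head; the strides and loss settings mirror common FCOS configs and should be treated as assumptions here.

.. code-block:: python

    # Hedged bbox_head config; strides and loss settings are assumptions.
    bbox_head = dict(
        type='FCOSHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        strides=[8, 16, 32, 64, 128],
        loss_cls=dict(type='FocalLoss', use_sigmoid=True,
                      gamma=2.0, alpha=0.25, loss_weight=1.0),
        loss_bbox=dict(type='IoULoss', loss_weight=1.0),
        loss_centerness=dict(type='CrossEntropyLoss', use_sigmoid=True,
                             loss_weight=1.0))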
14 changes: 6 additions & 8 deletions mmdet/models/dense_heads/fsaf_head.py
@@ -11,15 +11,12 @@

@HEADS.register_module()
class FSAFHead(RetinaHead):
"""FSAF anchor-free head used in [1].
"""Anchor-free head used in `FSAF <https://arxiv.org/abs/1903.00621>`_.
The head contains two subnetworks. The first classifies anchor boxes and
the second regresses deltas for the anchors (num_anchors is 1 for anchor-
free methods).
References:
.. [1] https://arxiv.org/pdf/1903.00621.pdf
Example:
>>> import torch
>>> self = FSAFHead(11, 7)
@@ -326,10 +323,11 @@ def reweight_loss_single(self, cls_loss, reg_loss, assigned_gt_inds,
Shape: (num_gts, ),
Returns:
cls_loss: Reduced corrected classification loss. Scalar.
reg_loss: Reduced corrected regression loss. Scalar.
pos_flags (Tensor): Corrected bool tensor indicating the final
postive anchors. Shape: (num_anchors, ).
tuple:
- cls_loss: Reduced corrected classification loss. Scalar.
- reg_loss: Reduced corrected regression loss. Scalar.
- pos_flags (Tensor): Corrected bool tensor indicating the
final positive anchors. Shape: (num_anchors, ).
"""
loc_weight = torch.ones_like(reg_loss)
cls_weight = torch.ones_like(cls_loss)
7 changes: 2 additions & 5 deletions mmdet/models/dense_heads/retina_head.py
@@ -7,15 +7,12 @@

@HEADS.register_module()
class RetinaHead(AnchorHead):
"""
An anchor-based head used in [1]_.
"""An anchor-based head used in
`RetinaNet <https://arxiv.org/pdf/1708.02002.pdf>`_.
The head contains two subnetworks. The first classifies anchor boxes and
the second regresses deltas for the anchors.
References:
.. [1] https://arxiv.org/pdf/1708.02002.pdf
Example:
>>> import torch
>>> self = RetinaHead(11, 7)
15 changes: 6 additions & 9 deletions mmdet/models/detectors/base.py
@@ -53,17 +53,14 @@ def extract_feats(self, imgs):
def forward_train(self, imgs, img_metas, **kwargs):
"""
Args:
img (list[Tensor]): list of tensors of shape (1, C, H, W).
img (list[Tensor]): List of tensors of shape (1, C, H, W).
Typically these should be mean centered and std scaled.
img_metas (list[dict]): list of image info dict where each dict
has:
'img_shape', 'scale_factor', 'flip', and my also contain
img_metas (list[dict]): List of image info dict where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
`mmdet/datasets/pipelines/formatting.py:Collect`.
**kwargs: specific to concrete implementation
For details on the values of these keys, see
:class:`mmdet.datasets.pipelines.Collect`.
kwargs (keyword arguments): Specific to concrete implementation.
"""
pass

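A hedged sketch of one ``img_metas`` entry carrying the keys listed above; the values are illustrative, and whether ``scale_factor`` is a float or an array depends on the data pipeline.

.. code-block:: python

    # Hedged sketch of one img_metas entry (values illustrative).
    img_metas = [dict(
        filename='demo.jpg',
        ori_shape=(426, 640, 3),
        img_shape=(800, 1202, 3),
        pad_shape=(800, 1216, 3),
        scale_factor=1.878,
        flip=False,
        img_norm_cfg=dict(mean=[123.675, 116.28, 103.53],
                          std=[58.395, 57.12, 57.375],
                          to_rgb=True))]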
17 changes: 17 additions & 0 deletions mmdet/models/detectors/rpn.py
@@ -50,6 +50,23 @@ def forward_train(self,
img_metas,
gt_bboxes=None,
gt_bboxes_ignore=None):
"""
Args:
img (Tensor): Input images of shape (N, C, H, W).
Typically these should be mean centered and std scaled.
img_metas (list[dict]): A list of image info dicts where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
:class:`mmdet.datasets.pipelines.Collect`.
gt_bboxes (list[Tensor]): Ground-truth boxes for each image, in
[tl_x, tl_y, br_x, br_y] format.
gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
boxes can be ignored when computing the loss.
Returns:
dict[str, Tensor]: A dictionary of loss components.
"""
if self.train_cfg.rpn.get('debug', False):
self.rpn_head.debug_imgs = tensor2imgs(img)

18 changes: 18 additions & 0 deletions mmdet/models/detectors/single_stage.py
@@ -65,6 +65,24 @@ def forward_train(self,
gt_bboxes,
gt_labels,
gt_bboxes_ignore=None):
"""
Args:
img (Tensor): Input images of shape (N, C, H, W).
Typically these should be mean centered and std scaled.
img_metas (list[dict]): A list of image info dicts where each dict
has: 'img_shape', 'scale_factor', 'flip', and may also contain
'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
For details on the values of these keys see
:class:`mmdet.datasets.pipelines.Collect`.
gt_bboxes (list[Tensor]): Ground-truth boxes for each image, in
[tl_x, tl_y, br_x, br_y] format.
gt_labels (list[Tensor]): Class indices corresponding to each box.
gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
boxes can be ignored when computing the loss.
Returns:
dict[str, Tensor]: A dictionary of loss components.
"""
x = self.extract_feat(img)
outs = self.bbox_head(x)
loss_inputs = outs + (gt_bboxes, gt_labels, img_metas)
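A hedged sketch of the ground-truth inputs described above, for a batch of two images; the coordinates and class indices are made up.

.. code-block:: python

    # Hedged sketch of the ground-truth inputs for a batch of two images.
    import torch

    gt_bboxes = [
        torch.tensor([[10., 20., 100., 200.]]),                      # image 1: 1 box
        torch.tensor([[0., 0., 50., 60.], [30., 40., 120., 150.]]),  # image 2: 2 boxes
    ]
    gt_labels = [
        torch.tensor([2]),
        torch.tensor([0, 5]),
    ]
    gt_bboxes_ignore = None  # or a list of (k, 4) tensors excluded from the loss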
2 changes: 1 addition & 1 deletion mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py
@@ -10,7 +10,7 @@ class ConvFCBBoxHead(BBoxHead):
r"""More general bbox head, with shared conv and fc layers and two optional
separate branches.
.. code-block::
.. code-block:: none
/-> cls convs -> cls fcs -> cls
shared convs -> shared fcs
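A hedged ``bbox_head`` config sketch for the shared conv/fc layout diagrammed above, assuming the common ``Shared2FCBBoxHead`` subclass of ``ConvFCBBoxHead``; the coder settings and class count are illustrative.

.. code-block:: python

    # Hedged bbox_head config assuming the Shared2FCBBoxHead subclass
    # (no shared convs, two shared fcs); values are illustrative.
    bbox_head = dict(
        type='Shared2FCBBoxHead',
        in_channels=256,
        fc_out_channels=1024,
        roi_feat_size=7,
        num_classes=80,
        bbox_coder=dict(type='DeltaXYWHBBoxCoder',
                        target_means=[0., 0., 0., 0.],
                        target_stds=[0.1, 0.1, 0.2, 0.2]),
        reg_class_agnostic=False)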
2 changes: 1 addition & 1 deletion mmdet/models/roi_heads/bbox_heads/double_bbox_head.py
@@ -72,7 +72,7 @@ def forward(self, x):
class DoubleConvFCBBoxHead(BBoxHead):
r"""Bbox head used in Double-Head R-CNN
.. code-block::
.. code-block:: none
/-> cls
/-> shared convs ->
2 changes: 1 addition & 1 deletion mmdet/models/roi_heads/mask_heads/fused_semantic_head.py
@@ -10,7 +10,7 @@
class FusedSemanticHead(nn.Module):
r"""Multi-level fused semantic segmentation head.
.. code-block::
.. code-block:: none
in_1 -> 1x1 conv ---
|
