Copilot Experience

GitHub Copilot Sharing

Introduction to GitHub Copilot (generated by Copilot)

GitHub Copilot is an AI pair programmer that helps you write code faster and with less effort. It draws context from the code you’re working on, suggesting whole lines or entire functions, and it can write tests too. GitHub Copilot is powered by OpenAI Codex, an AI system created by OpenAI. OpenAI Codex is a descendant of GPT-3 and has been trained on a selection of English text and source code from publicly available sources, including code in public repositories on GitHub. GitHub Copilot works with a broad set of frameworks and languages, including Python, JavaScript, TypeScript, Ruby, and Go.

Setup

  • Create a github.com account if needed, or log in with your existing github.com account.
  • Authorize via SSO.
  • At that point, you are not only in the organization but also onboarded for GitHub Copilot.
  • Go to https://docs.github.com/en/copilot/quickstart and follow the instructions to install the plug-in for your IDE.

Note:

  • You need to activate your GitHub account before using it. You can do this through the "Your GitHub launch code" email sent by GitHub.
  • Copilot supports the following IDEs and versions:
* IntelliJ IDEA Community Edition (2021.2.1 or later)
* IntelliJ IDEA Ultimate Edition (2021.2.1 or later)
* Visual Studio Code (1.60.0 or later)
* Visual Studio (16.10 or later)
* Visual Studio for Mac (8.10 or later)
* Neovim (0.5.0 or later)
* Sublime Text (4.0 or later)
* ...

How to Use

Features

1 Auto prompt

When you are coding and start writing a method, Copilot will generate the method body for you.

Sample 1: Enter a meaningful method name

protected AppMeta getAppMet

Copilot will generate the corresponding code, and you can accept it by pressing the Tab key.

protected AppMeta getAppMeta(TaskContext context) {
    String appName = context.getInputString(Constants.APPNAME);
    AppMeta meta = metaRepo.findById(appName).orElse(null);
    if (meta == null) {
        throw new RuntimeException(String.format("Cannot find app meta information for %s", appName));
    }
    return meta;
}

Sample 2: Let Copilot generate code from code comments.
Loop through the list and query the completion status of each job:

// Traverse uncompletedItr for up to 600s, with an interval of 100s between each traversal. If there are still unfinished jobs after traversing, uncompletedJobIds will be returned.
// If the job is completed, remove it from uncompletedJobIds.
// If the job fails, return uncompletedJobIds directly.

The generated code:

// Traverse uncompletedItr for up to 600s, with an interval of 100s between each traversal. If there are still unfinished jobs after traversing, uncompletedJobIds will be returned.
// If the job is completed, remove it from uncompletedJobIds.
// If the job fails, return uncompletedJobIds directly.
while (curTime <= 600) {
    try {
        TimeUnit.SECONDS.sleep(100);
        curTime += 100;
        while (uncompletedItr.hasNext()) {
            String statusLink = uncompletedItr.next();
            Map<String, Object> result = zebraClient.getZebraRequestInfo(statusLink);
            String status = (String) result.get("executionStatus");
            if (STATUS_COMPLETED.equals(status)) {
                uncompletedItr.remove();
            } else if (STATUS_FAILED.equals(status)) {
                context.error("Job %s failed", statusLink);
                return uncompletedJobIds;
            }
        }
    } catch (InterruptedException ignored) {
    } catch (RuntimeException e) {
        context.warn(e.getMessage());
    }
    if (uncompletedJobIds.isEmpty()) {
        break;
    }
}

2 More suggestions/completions

If the suggestion is not what you want, you can display more suggestions by configuring a shortcut key or by clicking refresh on the GitHub Copilot tab on the right side of IntelliJ IDEA.

//Generate a method for generating random numbers and return the result
public static int generateRandomNu

Clicking refresh in the GitHub Copilot tab displays more suggestions:

// solution 1
public static int generateRandomNumber() {
    Random rand = new Random();
    int randomNum = rand.nextInt(1000000);
    return randomNum;
}

// solution 2
public static int generateRandomNumber() {
    Random rand = new Random();
    return rand.nextInt(1000);
}

// solution 3
public static int generateRandomNumber() {
    Random random = new Random();
    return random.nextInt();
}

// solution 4
public static int generateRandomNumber() {
    Random random = new Random();
    return random.nextInt(100);
}

public static void main(String[] args) {
    System.out.println("Generate Random Number: " + generateRandomNumber());
}

// solution 5
public static int generateRandomNumber() {
    Random random = new Random();
    // Generate a random number between 0 and 100
    int result = random.nextInt(100);
    return result;
}

You can also open the suggestions panel quickly with a shortcut key; the default is opt + return. If there is a conflict, you need to configure the corresponding GitHub Copilot shortcut key yourself. The specific configuration method is at the end of the article.

3 GitHub Copilot Chat

Open GitHub Copilot Chat through the shortcut key or the tab on the right side of IntelliJ IDEA, enter your question, and get the answer. It can be used like ChatGPT; the drawback is that it does not carry context between questions.
For example, we can ask Copilot the following question.

//Generate a method for generating random numbers and return the result

The answer generated by Copilot is as follows:

import java.util.Random;

public class NumberGenerator {
    public static int generateRandomNumber() {
        Random random = new Random();
        return random.nextInt(100); // This will return a random number between 0 (inclusive) and 100 (exclusive)
    }
}

Application scenarios at work

1. Code Completion

Refer to the above section.

2. Refactor Code

For complex logic and heavily coupled code, Copilot can help with refactoring. Let's refactor the following code; it does too much in one method.

package com.garlic.app.tasks.adoptionDRP;

import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
import org.elasticsearch.common.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

import java.util.Collection;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

@Component
public class AutoCreateDRPExecutorFactory {
    @Autowired
    List<AbstractAutoCreateDRPExecutor> executorList;

    private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);

    @Autowired
    private ApplicationContext appContext;
    public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels,final Boolean repeat){
        return prodPoolMetadataInfoModels
                .stream()
                .filter(e -> {
                    if(Strings.isNullOrEmpty(e.getStgPoolId())){
                        LOGGER.info(String.format("App %s prod pool %s do not have staging pool info",e.getAppName(),e.getResourceId()));
                        return false;
                    }
                    return true;
                })
                .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName)).entrySet().stream()
                .map(entry -> entry.getValue()
                        .stream()
//                            .filter(prodPoolMetadataInfoModel -> !prodPoolMetadataInfoModel.isProdPoolE2EPipeline())
                        .filter(prodPoolMetadataInfoModel -> {
                            if(!repeat && prodPoolMetadataInfoModel.isProdPoolE2EPipeline()){
                                LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.",prodPoolMetadataInfoModel.getAppName(),prodPoolMetadataInfoModel.getResourceId()));
                            }
                            return  repeat ? repeat : !prodPoolMetadataInfoModel.isProdPoolE2EPipeline();
                        })
                        .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                        .entrySet().stream().map(e -> {
                            for (int i = 0; i < executorList.size(); i++) {
                                if (executorList.get(i).checkExecutorType(e.getValue())) {
                                    return executorList.get(i).getBean(appContext, e.getValue(), entry.getKey());
                                }
                            }
                            LOGGER.error(String.format("Create autoCreateDRPExecutor failed辫呻,appName %s commit id %s", entry.getKey(), e.getKey()));
                            return null;
                        }).filter(Objects::nonNull).collect(Collectors.toList())).flatMap(Collection::stream).collect(Collectors.toList());
    }
}

The generated result is as follows:

package com.garlic.app.tasks.adoptionDRP;

import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
import org.elasticsearch.common.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

@Component
public class AutoCreateDRPExecutorFactory {
    @Autowired
    List<AbstractAutoCreateDRPExecutor> executorList;

    private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);

    @Autowired
    private ApplicationContext appContext;
    public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels, final Boolean repeat) {
        return prodPoolMetadataInfoModels.stream()
                .filter(this::hasStagingPoolInfo)
                .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName))
                .values()
                .stream()
                .flatMap(modelsForApp -> this.createExecutorsForApp(modelsForApp, repeat))
                .collect(Collectors.toList());
    }

    private boolean hasStagingPoolInfo(ProdPoolMetadataInfoModel model) {
        if (Strings.isNullOrEmpty(model.getStgPoolId())) {
            LOGGER.info(String.format("App %s prod pool %s do not have staging pool info", model.getAppName(), model.getResourceId()));
            return false;
        }
        return true;
    }

    private Stream<AbstractAutoCreateDRPExecutor> createExecutorsForApp(List<ProdPoolMetadataInfoModel> modelsForApp, Boolean repeat) {
        return modelsForApp.stream()
                .filter(model -> shouldCreateExecutor(model, repeat))
                .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                .values()
                .stream()
                .map(this::createExecutorForCommit)
                .filter(Objects::nonNull);
    }


    private boolean shouldCreateExecutor(ProdPoolMetadataInfoModel model, Boolean repeat) {
        if (!repeat && model.isProdPoolE2EPipeline()) {
            LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.", model.getAppName(), model.getResourceId()));
            return false;
        }
        return repeat || !model.isProdPoolE2EPipeline();
    }

    private AbstractAutoCreateDRPExecutor createExecutorForCommit(List<ProdPoolMetadataInfoModel> modelsForCommit) {
        for (AbstractAutoCreateDRPExecutor executor : executorList) {
            if (executor.checkExecutorType(modelsForCommit)) {
                return executor.getBean(appContext, modelsForCommit, modelsForCommit.get(0).getAppName());
            }
        }
        LOGGER.error(String.format("Create autoCreateDRPExecutor failed缚甩,appName %s commit id %s", modelsForCommit.get(0).getAppName(), modelsForCommit.get(0).getCommitId()));
        return null;
    }
}

We can see that the code is more readable and the logic is clearer.
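As a minimal usage sketch (everything here is hypothetical: the calling task class, its run method, and the executor's execute() entry point are assumptions, not shown in the source), the refactored factory would typically be injected as a Spring bean and invoked like this:

package com.garlic.app.tasks.adoptionDRP;

import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class AutoCreateDRPTask {
    // Hypothetical caller; AutoCreateDRPExecutorFactory and the model/executor
    // types come from the project code shown above.
    @Autowired
    private AutoCreateDRPExecutorFactory executorFactory;

    public void run(List<ProdPoolMetadataInfoModel> models) {
        // One executor is built per (app, commit id) group; repeat = false skips
        // prod pools that already have an E2E pipeline.
        List<AbstractAutoCreateDRPExecutor> executors = executorFactory.getExecutor(models, false);
        // execute() is an assumed entry point on the abstract executor.
        executors.forEach(AbstractAutoCreateDRPExecutor::execute);
    }
}

Because the helper methods are now private and named after their intent, a caller like this only needs to care about the public getExecutor contract.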

3. Generate Test Case

Copilot can help you generate test cases. For example, we can generate test cases for the following code.

package com.garlic.cntrpdlc.migration.task;

import java.util.Random;


public class NumberGenerator {
    // Generate a method for generating random numbers
    public static void NumberGenerator() {
        Random random = new Random();
        int number = random.nextInt(100);
        System.out.println(number);
    }
}

We can enter the following command in the Copilot chat window, and Copilot will generate the test case for us.

/tests NumberGenerator

The generated result is as follows:

import com.garlic.cntrpdlc.migration.task.NumberGenerator;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class NumberGeneratorTest {

    @Test
    public void numberGeneratorOutputsRandomNumber() {
        ByteArrayOutputStream outContent = new ByteArrayOutputStream();
        System.setOut(new PrintStream(outContent));

        NumberGenerator.NumberGenerator();

        String output = outContent.toString();
        assertTrue(output.matches("\\d{1,2}\\r?\\n"), "Output should be a number between 0 and 99");
    }
}

4. Simplify

In some cases, Copilot can help simplify the code.

package com.garlic.cntrpdlc.migration.task;

import java.util.Random;


public class NumberGenerator {
    // Generate a method for generating random numbers and return the result
    public static void NumberGenerator() {
        Random random = new Random();
        int number = random.nextInt(100);
        System.out.println(number);
    }
}

After simplification:

package com.garlic.cntrpdlc.migration.task;

import java.util.Random;

public class NumberGenerator {
  public static void generateRandomNumber() {
    System.out.println(new Random().nextInt(100));
  }
}

Copilot can also help simplify code in other ways, such as simplifying the logic or the structure of the code, as sketched below.
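As a hedged illustration of the kind of logic simplification Copilot might propose (the method and its conditions are hypothetical, not taken from the project), a verbose nested conditional can be collapsed into a single boolean expression:

// Before: nested if/else that only decides between true and false (hypothetical example)
public static boolean isEligible(int age, boolean hasLicense) {
    if (age >= 18) {
        if (hasLicense) {
            return true;
        } else {
            return false;
        }
    } else {
        return false;
    }
}

// After: the same logic expressed as one boolean expression
public static boolean isEligible(int age, boolean hasLicense) {
    return age >= 18 && hasLicense;
}

Structural simplifications follow the same pattern: the behavior stays the same while the number of branches and temporary variables goes down.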

5. Fix code issues

GitHub Copilot can assist in fixing code by providing suggestions for correcting syntax errors, improving code structure, or optimizing code performance. When you write code and encounter an issue, GitHub Copilot can provide a suggestion to fix the problem based on the context of the code. It can also suggest alternative approaches to a problem or improvements to the code. However, it's important to note that while GitHub Copilot can provide suggestions, it's up to the developer to review and decide whether to apply the suggested fixes.

public int generateRandomNumber() {
    Random random = new Random();
    int randomNumber = random.nextInt(100);
    System.out.println(randomNumber / 0);
    return randomNumber;
}

In this case, GitHub Copilot might suggest the following fix:

public int generateRandomNumber() {
    Random random = new Random();
    int randomNumber = random.nextInt(100);
    System.out.println(randomNumber); // Removed division by zero
    return randomNumber;
}

6. Debugging

Copilot can also help you debug code. For example, to debug the FlowManager constructor, you can enter the following command in the Copilot chat window.

debug FlowManager.FlowManager method

The generated result is as follows:


The FlowManager.FlowManager method in the FlowManager class is a constructor that is responsible for loading task flows from YAML files located in the classpath under the /jobs directory. It uses the PathMatchingResourcePatternResolver to find these files and then deserializes them into TaskFlow objects using the ModelHelper.YMAPPER.readValue method. These TaskFlow objects are then stored in the flowMap for later use.

If you're experiencing issues with this method, here are a few things you might want to check:

  1. Ensure that the YAML files are correctly placed in the classpath under the /jobs directory.
  2. Check the format of the YAML files. They should be correctly formatted and should be able to be deserialized into TaskFlow objects.
  3. Check for IOExceptions. If there's an issue with reading the files, an IOException will be thrown.

To debug this method, you can add logging statements to provide more information about what the method is doing. Here's an example of how you might modify the method to include some additional logging:

public FlowManager() {
    ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(FlowManager.class.getClassLoader());
    try {
        Resource[] resources = resolver.getResources("classpath:/jobs/**/*.yaml");
        LOGGER.info("Found " + resources.length + " resources");
        for (Resource r : resources) {
            LOGGER.info("Processing resource: " + r.getFilename());
            TaskFlow taskFlow = ModelHelper.YMAPPER.readValue(r.getInputStream(), TaskFlow.class);
            flowMap.put(taskFlow.getName(), taskFlow);
            LOGGER.info("Added task flow: " + taskFlow.getName());
        }
    } catch (IOException e) {
        LOGGER.error("Cannot autowire the jobs from classpath", e);
        throw new InitializationException("Cannot autowire the jobs from classpath", e);
    }
}

This will log the number of resources found, the name of each resource being processed, and the name of each task flow added to the map. If an exception is thrown, it will be logged as well. This should give you a better idea of what's happening when this method is called.


7. Generate documentation for Java code

GitHub Copilot can help generate documentation for Java code. It can provide suggestions for JavaDoc comments for classes, methods, and fields based on their names and context. Here's an example:

/doc AutoCreateDRPExecutorFactory
package com.garlic.app.tasks.adoptionDRP;

import com.garlic.app.models.drpadoption.ProdPoolMetadataInfoModel;
import org.elasticsearch.common.Strings;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/**
 * This class is responsible for creating executors for the AutoCreateDRP process.
 * It uses the Spring Boot framework and is annotated as a Component, meaning it is a singleton and can be autowired.
 */
@Component
public class AutoCreateDRPExecutorFactory {
    // List of executors to be used in the creation process
    @Autowired
    List<AbstractAutoCreateDRPExecutor> executorList;

    // Logger for this class
    private static final Logger LOGGER = LoggerFactory.getLogger(AutoCreateDRPExecutorFactory.class);

    // Spring's ApplicationContext, used to get beans
    @Autowired
    private ApplicationContext appContext;

    /**
     * This method returns a list of executors for the given product pool metadata models.
     * It filters out models without staging pool info and groups them by app name.
     * Then, it creates executors for each app.
     *
     * @param prodPoolMetadataInfoModels List of product pool metadata models
     * @param repeat Boolean flag indicating whether to repeat the process
     * @return List of executors
     */
    public List<AbstractAutoCreateDRPExecutor> getExecutor(List<ProdPoolMetadataInfoModel> prodPoolMetadataInfoModels, final Boolean repeat) {
        return prodPoolMetadataInfoModels.stream()
                .filter(this::hasStagingPoolInfo)
                .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getAppName))
                .values()
                .stream()
                .flatMap(modelsForApp -> this.createExecutorsForApp(modelsForApp, repeat))
                .collect(Collectors.toList());
    }

    /**
     * This method checks if a model has staging pool info.
     *
     * @param model The product pool metadata model
     * @return true if the model has staging pool info, false otherwise
     */
    private boolean hasStagingPoolInfo(ProdPoolMetadataInfoModel model) {
        if (Strings.isNullOrEmpty(model.getStgPoolId())) {
            LOGGER.info(String.format("App %s prod pool %s do not have staging pool info", model.getAppName(), model.getResourceId()));
            return false;
        }
        return true;
    }

    /**
     * This method creates executors for a given app.
     * It filters out models that should not have an executor created and groups them by commit id.
     * Then, it creates an executor for each commit.
     *
     * @param modelsForApp List of product pool metadata models for an app
     * @param repeat Boolean flag indicating whether to repeat the process
     * @return Stream of executors
     */
    private Stream<AbstractAutoCreateDRPExecutor> createExecutorsForApp(List<ProdPoolMetadataInfoModel> modelsForApp, Boolean repeat) {
        return modelsForApp.stream()
                .filter(model -> shouldCreateExecutor(model, repeat))
                .collect(Collectors.groupingBy(ProdPoolMetadataInfoModel::getCommitId))
                .values()
                .stream()
                .map(this::createExecutorForCommit)
                .filter(Objects::nonNull);
    }

    /**
     * This method checks if an executor should be created for a model.
     *
     * @param model The product pool metadata model
     * @param repeat Boolean flag indicating whether to repeat the process
     * @return true if an executor should be created, false otherwise
     */
    private boolean shouldCreateExecutor(ProdPoolMetadataInfoModel model, Boolean repeat) {
        if (!repeat && model.isProdPoolE2EPipeline()) {
            LOGGER.error(String.format("App %s prod pool %s exists E2E pipeline already.", model.getAppName(), model.getResourceId()));
            return false;
        }
        return repeat || !model.isProdPoolE2EPipeline();
    }

    /**
     * This method creates an executor for a given commit.
     * It iterates over the list of executors and checks if the executor type matches the models for the commit.
     * If a match is found, it returns the bean for that executor.
     * If no match is found, it logs an error and returns null.
     *
     * @param modelsForCommit List of product pool metadata models for a commit
     * @return An executor if a match is found, null otherwise
     */
    private AbstractAutoCreateDRPExecutor createExecutorForCommit(List<ProdPoolMetadataInfoModel> modelsForCommit) {
        for (AbstractAutoCreateDRPExecutor executor : executorList) {
            if (executor.checkExecutorType(modelsForCommit)) {
                return executor.getBean(appContext, modelsForCommit, modelsForCommit.get(0).getAppName());
            }
        }
        LOGGER.error(String.format("Create autoCreateDRPExecutor failed屎债,appName %s commit id %s", modelsForCommit.get(0).getAppName(), modelsForCommit.get(0).getCommitId()));
        return null;
    }
}

The documentation comments in the code above were generated by Copilot.

8. Explain code logic

Enter the following command in GitHub Copilot Chat, and Copilot will explain the logic of the code.

/explain findAccessPoints

For example, we can let Copilot explain the logic of the following code.

public List<String> findAccessPoints(String poolId, TessEnv env, String user) {
    List<String> accessPoints = new ArrayList<>();
    Map<String, Object> nsvcs = tessClient.queryFcpObject(NSVC_API_VERSION, getNamespaceByPoolId(poolId, env),
            NSVC_RESOURCE, null, env, user);
    Optional.ofNullable(nsvcs).flatMap(r -> Optional.ofNullable(nsvcs.get("items"))).ifPresent(items -> {
        ((List<Map<String, Object>>) items).stream().forEach(i -> {
            Optional.ofNullable(i.get("metadata")).ifPresent(meta -> {
                String nsvc = String.valueOf(((Map<String, Object>) meta).get("name"));
                if (nsvc.contains("garlic.com")) {
                    accessPoints.add(nsvc);
                }
            });
        });
    });
    return accessPoints;
}

The generated explanation of the code logic is as follows.

The `findAccessPoints` method is part of a Java class. This method is designed to find and return a list of access points for a given pool ID, Tess environment, and user.

The method accepts three parameters: `poolId`, `env`, and `user`. `poolId` is a string representing the ID of the pool for which to find access points. `env` is an instance of `TessEnv`, representing the Tess environment. `user` is a string representing the user.

The method first initializes an empty ArrayList named `accessPoints`. This list will be used to store the found access points.

Next, the method calls the `queryFcpObject` method on a `tessClient` object, passing in several parameters including the API version, namespace, resource, environment, and user. This method returns a Map, with keys as Strings and values as Objects, which is stored in the `nsvcs` variable.

Then, the method uses Java 8's Optional and Stream APIs to process the `nsvcs` map. It first checks if the `nsvcs` map is not null and contains a key named "items". If it does, it casts the value associated with the "items" key to a List of Maps, where the keys are Strings and the values are Objects. It then iterates over each item in this list.

For each item, the method checks if it contains a key named "metadata". If it does, it retrieves the value associated with the "name" key from the metadata map and stores it in the `nsvc` variable. If the `nsvc` string contains "garlic.com", it is added to the `accessPoints` list.

Finally, the method returns the `accessPoints` list. This list contains the access points found for the given pool ID, Tess environment, and user.

9. Refinement

GitHub Copilot can refine not only code but also prose paragraphs.

There are many issues occurred during the migration process, which will block  
the CPDLC migration. For these known issues, we summarize the following  
pattern to help detect and solve problems in advance to make the whole process more efficient.

Refined result:

The migration process encounters numerous issues that can hinder the progress  
of CPDLC migration. We've identified and summarized these known issues into  
a pattern to facilitate early detection and resolution,  
thereby enhancing the overall efficiency of the process.

10. Question and Answer

Enter a question in GitHub Copilot Chat or in a comment, and Copilot will answer it. For example, we can ask Copilot the following question.

Command to get the log information of the app container in the pod of the namespace tess-deev in k8s

The answer generated by Copilot is as follows:

kubectl logs -n <namespace> <pod_name> -c <container_name>

e.g. kubectl logs -n tess-deev my-pod -c my-container

11. Document Completion

11.1 Generate a table of contents in markdown

For example, the table of contents of this document was generated by Copilot, which is more convenient than writing it by hand.

11.2 Text Prompt

For example, the following is the overview from the README of the Container PDLC migration tool. After I deleted it, Copilot automatically regenerated the overview content.

Old overview:

Automation flow currently supports to migrate C3 pools to container based workloads in automation way. The automation flow support to add manual phases and can continue to process if manual phase status changed to complete. 

New overview:

Automation Flow is a workflow engine to automate the migration process of ContainerPDLC. It is built on top of [Conductor](https://netflix.github.io/conductor/) and [Spring Boot](https://spring.io/projects/spring-boot). It provides a set of APIs to trigger the migration flow and check the status of the migration. It also provides a web portal to trigger the migration flow and check the status of the migration.

The new overview above was generated by Copilot.

Shortcut key

Configure shortcut keys

  • Settings → Keymap → Plugins → GitHub Copilot

Default shortcut keys

  • Code Completion: opt + \
  • Pre Completion: opt + [
  • Next Completion: opt + ]
  • Show Top 10 Completions: opt + return