Whether it’s working on a project, solving a difficult problem, or even refining soft skills like communication, the act of showing up and putting in the hours is essential. Practice makes perfect, but more importantly it’s about progress rather than perfection. Each hour you spend iterating, refining, failing, and retrying brings you closer to excellence. It doesn’t always feel that way in the moment, but when you look back at what you did before, you will see your progress. And that act of looking back and seeing how you improved is immensely rewarding, which in turn makes you enjoy your work.
As a junior engineer, there’s simply no substitute for getting the first 100K lines of code under your belt. The “start over each day” method will help get you to those 100K lines faster. You might think covering the same ground multiple times isn’t as valuable as writing 100K diverse lines of code. I disagree. Solving the same problem repeatedly is genuinely beneficial for retaining the patterns you figure out. You only need 5K perfect lines to see all the major patterns once; the other 95K lines are repetition to rewire your neurons.
2025-01-29 14:17:25.266794 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Reshape node. Name:'/transformer_encoder/layers.0/self_attn/Reshape_4' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:47 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{11,2,512}, requested shape:{10,16,64}
Traceback (most recent call last):
  File "/Users/ws/export.py", line 63, in <module>
    outputs = ort_session.run(
              ^^^^^^^^^^^^^^^^
  File "/Users/ws/miniforge3/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 266, in run
    return self._sess.run(output_names, input_feed, run_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/transformer_encoder/layers.0/self_attn/Reshape_4' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:47 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{11,2,512}, requested shape:{10,16,64}
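The failure is a plain element-count mismatch: tracing baked the example input's shape into the Reshape node, so a runtime tensor with a different sequence length or batch size no longer fits. A minimal sketch (using numpy rather than the original model) reproduces the arithmetic:

```python
import numpy as np

# The runtime tensor has shape {11, 2, 512}: seq_len=11, batch=2, hidden=512.
actual = np.zeros((11, 2, 512))

# The traced graph baked in a reshape to {10, 16, 64}
# (10 positions, 16 heads, 64 dims per head).
requested = (10, 16, 64)

print(actual.size)                  # 11 * 2 * 512 = 11264
print(int(np.prod(requested)))      # 10 * 16 * 64 = 10240

# numpy raises the same class of error that ONNX Runtime reports:
try:
    actual.reshape(requested)
except ValueError as e:
    print(e)
```

The usual fix is to re-export with `dynamic_axes` set for the batch and sequence dimensions in `torch.onnx.export` (or to export with an example input of the shape you will actually serve), so the reshape is computed from the runtime shape instead of the traced one.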
@lru_cache()
def get_available_devices() -> FrozenSet[str]:
    """
    Returns a frozenset of devices available for the current PyTorch installation.
    """
    devices = {"cpu"}  # `cpu` is always supported as a device in PyTorch
# The concrete implementation behind the lru_cache decorator
def _lru_cache_wrapper(user_function, maxsize, typed, _CacheInfo):
    # Constants shared by all lru cache instances:
    # every call to object() returns an object with a unique id
    sentinel = object()              # unique object used to signal cache misses
    make_key = _make_key             # build a key from the function arguments
    PREV, NEXT, KEY, RESULT = 0, 1, 2, 3   # names for the link fields

    cache = {}
    hits = misses = 0
    full = False
    cache_get = cache.get            # bound method to look up a key or return None
    cache_len = cache.__len__        # get cache size without calling len()
    lock = RLock()                   # because linked-list updates aren't threadsafe
    root = []                        # root of the circular doubly linked list
    root[:] = [root, root, None, None]   # initialize by pointing to self
    if maxsize == 0:

        def wrapper(*args, **kwds):
            # maxsize == 0 means no caching: call the user function directly
            # and return its result
            nonlocal misses
            misses += 1
            result = user_function(*args, **kwds)
            return result
    elif maxsize is None:

        def wrapper(*args, **kwds):
            # Unbounded cache: no LRU eviction to worry about, just look the
            # key up directly
            nonlocal hits, misses
            # build a unique key from args, kwds, and typed
            key = make_key(args, kwds, typed)
            result = cache_get(key, sentinel)
            if result is not sentinel:
                # the key was found, so a cached result already exists
                hits += 1
                return result
            misses += 1
            result = user_function(*args, **kwds)
            # cache the result of this call
            cache[key] = result
            return result
    else:
        # A bounded cache: LRU eviction is required

        def wrapper(*args, **kwds):
            # Size limited caching that tracks accesses by recency
            nonlocal root, hits, misses, full
            key = make_key(args, kwds, typed)
            with lock:
                link = cache_get(key)
                if link is not None:
                    # The cache entries form a circular doubly linked list.
                    # First unlink the node that was hit...
                    link_prev, link_next, _key, result = link
                    link_prev[NEXT] = link_next
                    link_next[PREV] = link_prev
                    # ...then move it to the last position: root marks the
                    # front (the oldest, least recently used data), while
                    # last holds the most recently used entry.
                    last = root[PREV]
                    last[NEXT] = root[PREV] = link
                    link[PREV] = last
                    link[NEXT] = root
                    hits += 1
                    return result
                misses += 1
            result = user_function(*args, **kwds)
            # Handle the miss case; a hit has already returned above
            with lock:
                if key in cache:
                    # Another thread inserted this key while the lock was
                    # released. Its node is already linked in place, so no
                    # update is needed; just make sure our result is returned.
                    pass
                elif full:
                    # The cache is full, so evict the least recently used
                    # entry. Reuse the evicted node to store the new data,
                    # avoiding an extra allocation: the old root becomes the
                    # newest link...
                    oldroot = root
                    oldroot[KEY] = key
                    oldroot[RESULT] = result
                    # ...and its successor becomes the new (empty) root.
                    root = oldroot[NEXT]
                    oldkey = root[KEY]
                    oldresult = root[RESULT]
                    root[KEY] = root[RESULT] = None
                    # Now update the cache dictionary.
                    del cache[oldkey]
                    # Save the potentially reentrant cache[key] assignment
                    # for last, after the root and links have been put in
                    # a consistent state.
                    cache[key] = oldroot
                else:
                    # Not full yet: simply insert the new link at the last
                    # position
                    last = root[PREV]
                    link = [last, root, key, result]
                    last[NEXT] = root[PREV] = cache[key] = link
                    # Check whether the cache is now full. Use the bound
                    # cache_len (cache.__len__) rather than len(), since
                    # len() itself could be wrapped by lru_cache.
                    full = (cache_len() >= maxsize)
            return result
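The hit/miss bookkeeping and eviction order above can be observed directly on the real `functools.lru_cache`. A small demo (the function `square` is my own, chosen for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(x):
    return x * x

square(2)   # miss: inserted into the cache
square(2)   # hit: the node for 2 moves to the most-recently-used position
square(3)   # miss: the cache is now full
square(4)   # miss: evicts 2, the least recently used entry
square(2)   # miss again, because 2 was just evicted
print(square.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```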
response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        },
        {
            "role": "assistant",
            "content": "The capital of France is Paris.",
        },
        {
            "role": "user",
            "content": "What about Spain?",
        },
    ],
    model=model_name,
)
response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Give me 5 good reasons why I should exercise every day.",
        },
    ],
    model=model_name,
    stream=True,
)
for update in response:
    # guard against empty-choice chunks that some streaming backends emit
    if update.choices and update.choices[0].delta.content:
        print(update.choices[0].delta.content, end="")
def get_image_data_url(image_file: str, image_format: str) -> str:
    """
    Helper function to convert an image file to a data URL string.

    Args:
        image_file (str): The path to the image file.
        image_format (str): The format of the image file.

    Returns:
        str: The data URL of the image.
    """
    try:
        with open(image_file, "rb") as f:
            image_data = base64.b64encode(f.read()).decode("utf-8")
    except FileNotFoundError:
        print(f"Could not read '{image_file}'.")
        exit()
    return f"data:image/{image_format};base64,{image_data}"
# Define a function that returns flight information between two cities
# (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM",
        })
    return json.dumps({"error": "No flights found between the cities"})
# Define a function tool that the model can ask to invoke in order to
# retrieve flight information
tool = {
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": """Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and
            time of the next flight""",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string",
                    "description": "The flight destination city",
                },
            },
            "required": ["origin_city", "destination_city"],
        },
    },
}
messages = [
    {"role": "system", "content": "You are an assistant that helps users find flight information."},
    {"role": "user", "content": "I'm interested in going to Miami. What is the next flight there from Seattle?"},
]
# We expect the tool to be a function call
if tool_call.type == "function":
    # Parse the function call arguments and call the function
    function_args = json.loads(tool_call.function.arguments.replace("'", '"'))
    print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
    callable_func = locals()[tool_call.function.name]
    function_return = callable_func(**function_args)
    print(f"Function returned = {function_return}")
    # Append the function call result to the chat history
    messages.append(
        {
            "tool_call_id": tool_call.id,
            "role": "tool",
            "name": tool_call.function.name,
            "content": function_return,
        }
    )
    # Get another response from the model
    response = client.chat.completions.create(
        messages=messages,
        tools=[tool],
        model=model_name,
    )
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?"),
    ],
    model=model_name,
    temperature=1.0,
    max_tokens=1000,
    top_p=1.0,
)
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="What is the capital of France?"),
    AssistantMessage(content="The capital of France is Paris."),
    UserMessage(content="What about Spain?"),
]
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
response = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me 5 good reasons why I should exercise every day."),
    ],
    model=model_name,
)
for update in response:
    if update.choices:
        print(update.choices[0].delta.content or "", end="")
# Define a function that returns flight information between two cities
# (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM",
        })
    return json.dumps({"error": "No flights found between the cities"})
# Define a function tool that the model can ask to invoke in order to
# retrieve flight information
flight_info = ChatCompletionsToolDefinition(
    function=FunctionDefinition(
        name="get_flight_info",
        description="""Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and
            time of the next flight""",
        parameters={
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string",
                    "description": "The flight destination city",
                },
            },
            "required": ["origin_city", "destination_city"],
        },
    )
)
messages = [
    SystemMessage(content="You are an assistant that helps users find flight information."),
    UserMessage(content="I'm interested in going to Miami. What is the next flight there from Seattle?"),
]
# We expect the tool to be a function call
if isinstance(tool_call, ChatCompletionsToolCall):
    # Parse the function call arguments and call the function
    function_args = json.loads(tool_call.function.arguments.replace("'", '"'))
    print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
    callable_func = locals()[tool_call.function.name]
    function_return = callable_func(**function_args)
    print(f"Function returned = {function_return}")
    # Append the function call result to the chat history
    messages.append(ToolMessage(tool_call_id=tool_call.id, content=function_return))
    # Get another response from the model
    response = client.complete(
        messages=messages,
        tools=[flight_info],
        model=model_name,
    )
# pip install -U google-generativeai
import google.generativeai as genai
import os
import PIL.Image
# obtain your key at https://aistudio.google.com/
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel('gemini-1.0-pro-latest')
response = model.generate_content("Tell me a ghost story")
print(response.text)
The prompt is as follows: 'Return bounding boxes of the , in the format of [ymin, xmin, ymax, xmax]'
# pip install -U google-generativeai
import google.generativeai as genai
import os
import PIL.Image
# obtain your key at https://aistudio.google.com/
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel('gemini-1.5-pro-latest')
# output bbox
img = PIL.Image.open("dancer.jpg")
prompt = 'Return bounding boxes of the dancer, in the format of [ymin, xmin, ymax, xmax]'
response = model.generate_content([img, prompt])
print(response.text)
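The reply in `response.text` is plain text, so a small parser (my own sketch, not part of the google-generativeai API) is needed to turn the `[ymin, xmin, ymax, xmax]` list into pixel coordinates. Gemini typically reports box coordinates normalized to a 0-1000 range, so they must be scaled by the actual image size (an assumption worth checking against your model version):

```python
import re

def parse_bbox(text: str, img_width: int, img_height: int):
    """Parse '[ymin, xmin, ymax, xmax]' from the model's reply and scale it
    to pixel coordinates, assuming 0-1000 normalized values."""
    match = re.search(r"\[\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\]", text)
    if match is None:
        return None
    ymin, xmin, ymax, xmax = (int(g) for g in match.groups())
    return (
        xmin * img_width // 1000,
        ymin * img_height // 1000,
        xmax * img_width // 1000,
        ymax * img_height // 1000,
    )

print(parse_bbox("[100, 200, 900, 800]", 640, 480))  # (128, 48, 512, 432)
```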
# mimic_head

Unofficial One-click Version of LivePortrait, with Webcam Support

## Features

+ with webcam, video and single image support
+ with cpu, mps and cuda backend support, you can run it without Nvidia GPU!

## Screenshot

+ Image mode:

  

+ Video mode:

  https://github.com/user-attachments/assets/1aef9ae6-7d05-4fea-a03c-2c3de76df8b1

+ Webcam mode:

  NOTE: FPS ~= 13 on my mac laptop and there is observable delay in this video

  https://github.com/user-attachments/assets/6a2ce4c5-e3f2-40cd-9fe9-c081407aaca1

## Install and use

```bash
pip install mimic_head
mimic_head run
```
# 🎭 mimic_head

  

🚀 **Unofficial One-click Version of LivePortrait** with Webcam Support!

## 🌟 Features

- 📷 **Webcam, Video, and Single Image Support**:
  - Easily switch between different input modes to suit your needs.
- 🖥️ **CPU, MPS, and CUDA Backend Support**:
  - Run seamlessly without needing an Nvidia GPU!

## 📸 Screenshot

### Image Mode:



### Video Mode:

https://github.com/user-attachments/assets/1aef9ae6-7d05-4fea-a03c-2c3de76df8b1

### Webcam Mode:

**Note: FPS ~ 13 on a Mac laptop with noticeable delay.**

https://github.com/user-attachments/assets/6a2ce4c5-e3f2-40cd-9fe9-c081407aaca1

## 🚀 Getting Started

### 📦 Installation

To install and use `mimic_head`, simply run the following command:
```bash pip install mimic_head ```
### 🛠️ Usage

Once installed, you can start the application by running:
```bash mimic_head run ```
## 📚 Documentation
For detailed instructions and advanced usage, please refer to our [README](https://github.com/vra/mimic_head).
## 🤝 Contributing

We welcome contributions! If you'd like to contribute, please fork the repository and use a feature branch. Pull requests are warmly welcomed.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 🛡️ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 💬 Contact

For any inquiries, questions, or issues, please open an issue in this repository or contact me at <wyf.brz@gmail.com>.
## 📝 Acknowledgments

- Special thanks to the original creators of LivePortrait for their work.
- Inspired by the amazing community contributions and ideas.
## ⭐ Support

If you like this project, please give it a ⭐ on [GitHub](https://github.com/vra/mimic_head)!
---
Made with ❤️ by [Yunfeng Wang](https://github.com/vra).
error: Failed to download distributions
  Caused by: Failed to fetch wheel: grpcio==1.62.1
  Caused by: Failed to extract source distribution
  Caused by: request or response body error: operation timed out
  Caused by: operation timed out